Psychohistory: On Isaac Asimov, Big Data, and Surveillance Capitalism

Design Narratives
20 min read · Jun 11, 2020

Science fiction often falls short in its ability to predict future technological developments. However, for the multitude of predictions pertaining to future technology which have proven false — from cities on the moon to teleportation — there are a few pressing examples where the genre has foregrounded now-critical issues, conceived long before they became pertinent to present-day society. It is somewhat telling that these critical issues are often closely linked to the sociological impact of technology on society, in ways that cities on the moon and teleportation never could be.

It may be worth reflecting that authors of science fiction are adept at recognising social change by way of their fraternity with the practice of design fiction and the essentially speculative nature with which it is imbued. As Julian Bleecker affirms, ‘Design fiction does all of the unique things that science-fiction can do as a reflective, written story telling practice. Like some forms of science fiction, it speculates about a near future tomorrow, extrapolating from today. Design fiction is the cousin of science fiction. It is concerned [with] exploring multiple potential futures…[creating] opportunities for reflection.’ (2009: 8)

In 1984 (2013) — first published in 1949 — George Orwell depicts a society in which every individual is closely surveilled within the physical world by government, with the assistance of indoctrinated citizens. The idea of ‘Big Brother’, as Orwell terms the surveilling entity, has permeated global society in the successive decades like no other cultural motif. This is a permeation supported by revelations of NSA eavesdropping on telephone calls in the wake of 9/11 (Savage and Lichtblau, 2016), as well as recent confirmation that governments in the United States and United Kingdom have conducted more widespread surveillance of their populations for decades via the NSA and GCHQ, respectively. (MacAskill and Dance, 2013; Bellamy Foster and McChesney, 2014)

The panoptic imperative originally conceived by Jeremy Bentham in the 18th Century (Božovič, 1995) and later dissected by Foucault (McKinlay and Starkey, 1998) and Zuboff (1989) is all too pertinent more than two hundred years after it was first devised. As Lyon highlights, ‘The Big Brother trope did not in its original incarnation refer to anything outside the nation-state (such as the commercial or internet surveillance that is prevalent today). Nor did Orwell guess at the extent to which the “telescreen” would be massively enhanced by developments first in microelectronics and then in communications including global TV and searchable databases. But it would be naïve to imagine that Big Brother type threats are somehow a thing of the past.’ (2003: 33)

Elsewhere, in Brave New World (2007) — first published in 1931 — Aldous Huxley presents the concept of ‘neo-Pavlovian conditioning’, wherein every individual in society is grown according to a caste system, instilled with traits which align with their expected purpose and — most pertinent to our social media-saturated lifestyles — social status in life. Fast-forward to the 21st Century, and MIT Technology Review reports that gene-edited humans have been produced for the first time, their embryos edited in vitro using CRISPR with the intent of conferring genetic resistance to HIV. (Regalado, 2018)

Most importantly — particularly in the context of this essay — in 1942, a lesser-known contemporary of Orwell and Huxley, Isaac Asimov, began publishing the stories that would become the increasingly popular novel Foundation (2016). It is here that Asimov presents ‘psychohistory’ — at the time, a little-known amalgam within the social sciences — foregrounding theories of groupthink, behavioural manipulation and herd behaviour on a global scale. As Bohannon acknowledges, ‘With powerful computers and gargantuan data sets, [Asimov] imagined, researchers would forecast not just elections, but the rise and fall of empires. A lifetime later, the computers and the data Asimov envisioned are becoming reality.’ (2017: 470) What Bohannon alludes to — and Asimov envisioned in 1942 — is a benign version of what society now recognises as behavioural economics and its dependency on big data, that is, ‘Extremely large data sets that may be analysed computationally to reveal patterns, trends, and associations, especially relating to human behaviour and interactions.’ (Oxford Dictionaries, 2019)
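To make that definition concrete, here is a minimal sketch of what ‘analysed computationally to reveal patterns, trends, and associations’ can mean in practice. The events, page names, and purchase signal below are entirely invented for illustration; real behavioural datasets run to billions of such rows.

```python
# A toy "big data" analysis on fabricated behavioural events: which visited
# pages are most associated with an eventual purchase?
import pandas as pd

# Hypothetical clickstream: each row is one user action.
events = pd.DataFrame({
    "user": ["a", "a", "a", "b", "b", "c", "c", "c"],
    "page": ["home", "pricing", "buy", "home", "blog", "pricing", "blog", "buy"],
})

# For every user, note whether they eventually purchased.
purchased = events.groupby("user")["page"].apply(lambda pages: "buy" in set(pages))

# Association: how often each visited page co-occurs with a purchase.
visited = events[events["page"] != "buy"].drop_duplicates(["user", "page"])
visited["purchased"] = visited["user"].map(purchased)
print(visited.groupby("page")["purchased"].mean())
# e.g. pricing -> 1.0, blog -> 0.5: a crude pattern, trend, and association.
```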

These patterns, trends and associations exist as a byproduct within the data generated by our daily lives — data which is ready to be harvested by those corporations with both the financial and technological means, and the economic motivation to do so. As Galič, et al. expose, ‘What is called big data is the foundational component of [a] new economic logic, which is based on prediction and its monetization — selling access to the real-time flow of people’s daily life in order to directly influence and modify their behaviour for profit.’ (2017: 25) The corporations who partake in this exploitation for financial gain we can term ‘surveillance capitalists’. (Zuboff, 2015) That we don’t yet have a name for those who employ the practices of surveillance capitalism for political gain should be nothing less than disconcerting, for it reflects the clandestine nature of an increasingly prolific activity.

It is worth noting that, in the context of this essay, I make use of Shoshana Zuboff’s definition of surveillance capitalism, which she defines as, ‘A new economic order that claims human experience as free raw material for hidden commercial practices of extraction, prediction and sales.’ (2019: end papers) Through this essay I contend that the very nature of surveillance capitalism has precarious ramifications for the economic and political freedoms we currently enjoy as users of the internet, and citizens of our respective nation-states. Furthermore, I investigate the potential evidence for this prediction and influence imperative to move from a purely commercial incentive, to a more fateful, political motive.

In order to address surveillance capitalism directly, it is important that we acquaint ourselves with the means by which internet corporations, and a growing list of non-internet actors, acquire the ability to predict and influence our behaviour.

Neither the prediction nor influence of behaviour would be possible without the financial capacity to do so. With the sustained commercialisation of the internet through the 1990s, it was widely recognised by the close of the century that a new technological era had dawned. Whilst Western populations saw the various benefits of an increasingly accessible internet as incentive to buy a computer, business recognised the opportunity to reach more customers, and in turn, increase profits. As Curran, et al. note, ‘The growing influence of commerce seemed merely to extend the benefits of [the internet] to more people by simplifying its technical aspects and promoting its use… This was a moment of triumphalism when democracy and capitalism had defeated communism.’ (2012: 41)

No company simplified the technical aspects and promoted internet use as much as the now-omniscient Google. Founded in 1998 — according to Zuboff (2019) — Google would languish through years of unprofitability before discovering in 2001 that it could use the way in which individuals made use of its search engine to increase the likelihood that those same individuals would click on an advertisement. The result was a sudden increase in revenue via its AdWords advertising service, but more importantly, this revelation has come to be recognised as the advent of surveillance capitalism. As then-chief economist at Google, Hal Varian explains, ‘Data extraction and analysis — is what everyone is talking about when they talk about big data.…Nowadays we hear a lot about “predictive analytics”, “data mining”, and “data science”. The techniques from these subjects, along with some good old-fashioned statistics and econometrics have allowed for deeper analysis of these vast data sets, enabled by computer-mediated transactions.’ (Varian, 2014)
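The mechanism Varian gestures at can be illustrated with a toy click-prediction model. Everything below — the features, the data, the choice of logistic regression — is an assumption made for the sketch; Google’s actual systems are proprietary and incomparably more sophisticated.

```python
# A hedged sketch of "predictive analytics": estimating the probability that
# a user clicks an advertisement from fabricated behavioural features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per search: query length, hour of day, and the
# user's past click rate (all normalised to 0..1).
X = rng.random((1000, 3))
# Fabricated ground truth: habitual clickers tend to click again.
y = (X[:, 2] + 0.1 * rng.standard_normal(1000) > 0.5).astype(int)

model = LogisticRegression().fit(X, y)

# The "prediction product": the likelihood that this user clicks this ad.
new_search = [[0.4, 0.7, 0.9]]
print(f"P(click) = {model.predict_proba(new_search)[0, 1]:.2f}")
```

The point of the sketch is the economic logic rather than the model: whichever model predicts clicks more accurately converts behavioural surplus into advertising revenue more reliably.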

As an economist and then-employee of Google, Varian can hardly be faulted for his devoted vindication of surveillance capitalism. However, others are more cautious. Zuboff, for instance, foregrounds the danger of data extraction and analysis, arguing, ‘Although surveillance capitalism does not abandon established capitalist “laws”…earlier dynamics now operate in the [additional] context of a new logic of accumulation.’ (2019: 66) Put succinctly, surveillance capitalism doesn’t simply extract financial gain from the individual; it also accumulates and capitalises upon the individual’s behaviour, enabling further extraction through the personally targeted advertising which the analysis of that behaviour makes possible.

As with many technological advances over the centuries, we must recognise that Google would not have gained the strength it has without a social zeitgeist to match the incentives of its surveillance capitalism. It was in the wake of the September 11, 2001 terrorist attacks in the United States — the same year in which Google finally became profitable — that governments around the world suddenly felt the urgent necessity for unfettered surveillance. Whilst it would be a stretch to argue that these attacks directly caused Google’s profitability, it wasn’t long before the US government — namely the NSA and CIA — saw an opportunity in Google’s ability to capture and analyse vast amounts of data. (Bellamy Foster and McChesney, 2014)

As Zuboff asserts, ‘The elective affinity between public intelligence agencies and the fledgling surveillance capitalist Google blossomed in the heat of emergency.…The 9/11 attacks transformed the government’s interest in Google.…Both institutions craved certainty…[this] contributed to the fertile habitat in which the surveillance capitalism mutation would be nurtured to prosperity.’ (2019: 115) In short, the CIA would allow Google to continue collecting behavioural data unfettered by any political interference, provided Google cooperated with government requests for information. (MacAskill and Dance, 2013)

This affinity between government agencies and Google is further recognised by Lyon, who reflects, ‘Big Data represents a confluence of commercial and governmental interests.…National security is a business goal as much as a political one and there is a revolving door between the two in the world of surveillance practices.’ (2014: 9) Unfortunately, neither Google nor the CIA was content simply to benefit from an agreement of mutual assistance in the wake of 9/11. It is noted that both went on to operate as venture capitalists, funding tech start-ups that promised to further increase each entity’s capture of big data. (Zuboff, 2019)

Zuboff highlights the funding of Recorded Future: ‘In 2009 Google Ventures and In-Q-Tel [the CIA’s venture capital front] both invested in…Recorded Future, [a business which] monitors every aspect of the web in real time in order to predict future events.’ (2019: 117) A Wired Magazine article on the development reports that, ‘[Recorded Future]…scours tens of thousands of websites, blogs and Twitter accounts to find the relationships between people, organizations, actions and incidents — both present and still-to-come.…The investments are bound to be fodder for critics of Google, who already see the search giant as overly cozy with the U.S. government.’ (Shachtman, 2010)

One could argue that Recorded Future’s intent — to predict future events through the use of big data — reflects the ideals of Asimov’s ‘psychohistory’ as closely as Orwell’s ‘Big Brother’ reflects NSA eavesdropping, or Huxley’s ‘neo-pavlovian conditioning’ reflects CRISPR gene editing. However, unlike Asimov’s benevolent ‘psychohistory’, the unfortunate reality is that with funding from the CIA and Google, we can never be certain that Recorded Future’s business model will be used positively, in a way that is solely to the benefit of society.

I would posit that this development should be regarded as among the first examples of a political entity — the CIA — utilising the internet as a means to capture behavioural data, for the purpose of predicting the intentions of the human population. That these desires are mirrored by surveillance capitalists like Google raises further questions about the relationship between business and political intent.

Highlighting just how formidable the connection between politics and surveillance capitalism has become, Zuboff reflects, ‘Google demonstrated [that] the same predictive knowledge derived from its behavioural surplus that had made the surveillance capitalists wealthy could also help candidates win elections.…Beginning with the 2008 Obama presidential campaign.…[then-Google CEO, Eric] Schmidt had a leading role in organising teams and…[implementing] cutting-edge data strategies…with the science of behavioural prediction.’ (2019: 122) In the context of this development, it is clear that the behavioural surplus collected by Google made it possible to target US voters so specifically in the 2008 presidential campaign.

As we have since come to learn, by the time of the 2016 US presidential election this technology had been further developed not only to identify undecided voters, but to target them with tailored messages — the tone of which could be adjusted to suit the individual’s emotional disposition — urging them to vote in a way which would swing the election in now-president Donald Trump’s favour. (Hern, 2018) This development came from the now-notorious political consultancy Cambridge Analytica. However, it was achieved using neither Google-obtained behavioural data nor Google’s advertising platform. In this case, it was Facebook which would prove to be both the source of behavioural data and the advertising platform, as Cambridge Analytica built a business out of behavioural micro-targeting for political gain.

Then-CEO of Cambridge Analytica, Alexander Nix acknowledges, ‘Probably more important [than demographics], are psychographics. That is, an understanding of your personality.…It is personality that drives behaviour…[and] obviously influences how you vote. By having hundreds…of thousands of Americans undertake [a Facebook] survey, we were able to form a model to predict the personality of every single adult in the United States of America.…We have somewhere close to four or five thousand data points on every adult in the United States.’ (Concordia, 2016)
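The pipeline Nix describes — fit a model on those who took the survey, then extrapolate to everyone who never did — can be sketched in a few lines. The features, the trait, and the model below are hypothetical stand-ins of my own, not Cambridge Analytica’s actual method.

```python
# A speculative sketch of psychographic extrapolation on invented data.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(1)

# 1,000 survey-takers, each with 50 hypothetical "data points"
# (likes, demographics, activity counts...).
X_survey = rng.random((1000, 50))
# Their self-reported trait score, e.g. "openness", on a 0-1 scale.
openness = X_survey @ rng.random(50) / 25 + 0.05 * rng.standard_normal(1000)

trait_model = Ridge().fit(X_survey, openness)

# The extrapolation step: score a population that never took any survey.
X_everyone_else = rng.random((100_000, 50))
predicted_openness = trait_model.predict(X_everyone_else)
print(predicted_openness[:5])  # per-person psychographic estimates
```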

That the true purpose of Cambridge Analytica’s apparently innocent quiz wasn’t made explicit to those individuals who partook in it reveals a lot about the nature of surveillance capitalism. For the practice to be effective, harvested behaviour must be as natural as possible, so as to reduce the potential of misinterpreting the individual and producing inaccurate data. To this extent, Facebook’s own Terms of Service delineate that users must, ‘Use the same name that [they] use in everyday life; provide accurate information about [themselves]; [and] create only one account ([their] own).’ (Facebook, 2018)

It is Zuboff who recognises that in order for what she calls the ‘dispossession cycle’ (2019) — that is, the process in which behavioural surplus is extracted from individuals — to be effective, individuals often cannot be fully aware of the extent to which their every online activity is being harvested for capital by internet corporations and those businesses who make use of the extracted data. Bratton goes further: ‘Every time anyone makes a search on Google and clicks on a particular result or advertisement, [they are] also retraining the platform’s algorithmic intelligence, a tiny bit each time, helping it to make incrementally more precise predictions of User intention, intuition, desire, and demand.’ (2015: 137; original emphasis)
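Bratton’s observation maps onto what machine learning calls online or incremental learning: each interaction is one small gradient step. The sketch below uses scikit-learn’s `partial_fit` as a stand-in for the platform’s (unknown, proprietary) training loop; the features are hypothetical.

```python
# Each observed click nudges the model "a tiny bit each time".
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")    # online logistic regression
classes = np.array([0, 1])                # 0 = ignored result, 1 = clicked it

def observe(features, clicked):
    """One user interaction = one small weight update for the platform."""
    model.partial_fit(np.array([features]), np.array([clicked]), classes=classes)

# Every search-and-click retrains the model incrementally, as Bratton notes.
observe([0.2, 0.9, 0.1], clicked=1)
observe([0.8, 0.1, 0.5], clicked=0)
```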

It is then interesting that, according to the Pew Research Center, Facebook is becoming more adept at extracting behavioural data in order to infer its users’ political persuasion. (Mitchell, et al., 2014) Hitlin and Rainie report that, ‘It is relatively common for Facebook to assign political labels to its users.…(51%) of those [surveyed] are given such a label.…Among those who are assigned a label on their political views…(73%) say the listing very accurately or somewhat accurately describes their views.’ (Hitlin and Rainie, 2019) Furthermore, they go on to acknowledge that, ‘Social media users say it would be very or somewhat easy for these platforms to determine their race or ethnicity (84%)…their political affiliation (71%) or their religious beliefs (65%).’ (Hitlin and Rainie, 2019)

Given that race, ethnicity, and religion are such important factors in politics around the world, these statistics suggest not only that Facebook — with its 2.32 billion monthly active users (Statista, 2019) — is more than capable of affecting political persuasion within the public on a global scale, but that it can effectively align this political persuasion along racial, ethnic, and religious voting blocs. With this much detail, it may even be able to extrapolate upon gathered data and foresee the outcome of an election weeks in advance. The potential for this electoral foresight is compounded by reports in 2014 (Flynn, 2014; Gold, 2014) that Facebook would supply ABC News and BuzzFeed with statistical data on the political leanings of its users in the run-up to the 2016 presidential election. In doing so, Facebook openly acknowledged that, at the very least, the platform is more than capable of taking the political pulse of a nation.

With this in mind, the question I am inclined to ask is whether there is an incentive within Facebook to affect this political persuasion. I would argue, based on the extent of Facebook’s government lobbying in the United States — where, in 2018, it set records for financial expenditure (Brody, 2019) — there is ample incentive within Facebook to sway political persuasion, if not globally, then at least in the US, where the company is both headquartered and listed on the stock market. Zittrain theorises, ‘Suppose that Mark Zuckerberg personally favours whichever candidate you don’t like. He arranges for a voting prompt to appear within the newsfeeds of tens of millions of active Facebook users…[but] chooses not to [influence] the feeds of users unsympathetic to his views. Such machinations…flip the outcome of our hypothetical election.…The scenario imagined…is an example of digital gerrymandering.’ (Zittrain, 2014)

To those who would argue that Facebook is merely a social network and cannot affect political persuasion in individuals who do not make use of its services, I return to its fraternal counterpart, Google. As we have established, Google, like Facebook, is reliant on surveillance capitalism. Surveillance capitalism requires not only vast amounts of data on an individual, but also great depth of meaning in the data captured — an economy of scale, as well as an economy of scope. (Zuboff, 2019)

Unfortunately, since much of our true self is expressed in our daily lives — as opposed to our curated online personas — the most valuable data is to be harvested from the physical world. Where telemetry was once used to track the movements of endangered animals, the same principle has been reimagined through the Internet of Things (IoT). Aside from using its profusion of online services — on which almost all internet users are reliant to some extent — as a mode of harvesting behavioural data, Google has amassed a repertoire of IoT devices designed to accrue further behavioural data from our offline existence. (Curran, 2018) Indeed, part of the reason for the creation of the Android operating system was to enable Google to gather data from individuals wherever they might be. As Zuboff notes, ‘Android quickly became [Google’s] second critical supply route for behavioural surplus… [the company’s] insiders had grasped the potential for growth and profit through behavioural surplus and fabrication into prediction products.’ (2019: 133)

What then does this mean for political actors? Surveillance capitalists strive for certainty — derived from behavioural analysis — in order to increase the reliability of their advertising sales. As they further hone their ability to achieve this objective, we can be sure the political determination to make use of the techniques of surveillance capitalism will follow suit. Zuboff continues, ‘Cambridge Analytica merely reoriented the surveillance capitalist machinery from commercial markets in behavioural futures toward guaranteed outcomes in the political sphere.’ (2019: 281)

What if political entities were one day able to ascertain the political persuasion of a person simply by looking at them? One of the most concerning methods of data capture is the possibility of emotional harvesting. In 2015 a start-up named Realeyes was awarded a €3.6 million grant by the European Commission to develop, ‘A project, codenamed “SEWA: Automatic Sentiment Analysis in the Wild”, [capable of using] automated technology…to read a person’s emotion when they view content and then establish how this relates to how much they liked the content.’ (McEleny, 2015)

As the Realeyes website boasts, ‘We’ve taught computers to read emotions and measure attention using webcams…giving brands an additional layer of data to optimise their marketing campaigns. Our tech is 75% accurate at predicting sales.’ (Realeyes, 2019a) It is not hard to envisage a near future wherein YouTube — a subsidiary of Google — updates its terms of service to stipulate that users must allow the platform to record them via webcam as they view its content, the intent being the corporation’s harvesting of emotional data.

I would argue that this is surveillance capitalism’s ultimate goal. With enough emotional data on an individual, the potential to extrapolate upon how specific content will make a person ‘feel’ would enable Google to tailor advertising content in a way that ensures they are always able to predict an individual’s reaction and — given enough practice — ultimately control how that individual reacts to both content and advertising. In line with Google’s economic imperative, this would certainly result in the individual’s unquestioned purchase of whatever is being advertised to them, but could someday be transmuted to the political stage, not unlike Schmidt’s role in the 2008 US presidential election.

In the Realeyes white paper, CEO Mihkel Jäätma suggests, ‘Having automated this process [of emotional extraction], it can then be scaled up to simultaneously track the emotions of entire audiences.’ (Realeyes, 2019b; my emphasis) In the present political climate of partisan demagoguery, it is not difficult to imagine current president Donald Trump — or any other world leader — at a political rally, receiving prompts and updated content on their teleprompter, notifying them when they are losing rapport with their audience, based on the campaign manager’s ability to emotionally scan the witnessing population.
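The rally scenario is speculative, but the aggregation step Jäätma describes is technically trivial once per-face scores exist. A minimal sketch, assuming some upstream system (not specified here) already emits a 0–1 engagement score for each audience member:

```python
# Hypothetical audience-rapport signal aggregated from per-face scores.
import numpy as np

rng = np.random.default_rng(2)

def audience_rapport(engagement_scores: np.ndarray) -> str:
    """Collapse thousands of per-face engagement estimates into one prompt."""
    if engagement_scores.mean() < 0.4:
        return "LOSING ROOM - switch topic"
    return "HOLDING ROOM - continue"

# One "frame" of a 5,000-person rally, each face scored 0..1 for engagement.
frame_scores = rng.random(5000) * 0.7     # a flagging crowd
print(audience_rapport(frame_scores))     # -> "LOSING ROOM - switch topic"
```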

Zuboff, incensed at the prospect of this, reflects, ‘Surveillance [capitalism]… [now] violates the inner sanctum as machines and their algorithms decide the meaning of my breath and my eyes, my jaw muscles, [and] the hitch in my voice.’ (2019: 291) Only time will tell how we as a society might react to such impending developments. I for one share Zuboff’s apprehension.

I began this essay with a meditation on science fiction’s ability to imagine an existence wherein, with enough data on a population, the future could be predicted. Through this essay I have highlighted that with the commercialisation of the internet, the increase in internet accessibility, and the resulting proliferation of both surveillance capitalism and big data capture, the previously fictional idea of predicting and thereby influencing an individual’s future is in the process of materialising.

As evidenced in the course of this essay — regardless of the extent to which each of us makes use of the internet — the nature of surveillance capitalism, in its endeavour to harvest data from our every decision, action, and now emotion, poses a very real threat to our individual freedoms, particularly with regards to personal autonomy and political self-determination. It is Zuboff who asserts that, ‘Surveillance capitalists’ ability to evade our awareness is an essential condition.…We are excluded because we are friction that impedes…surveillance capitalism’s knowledge dominance.…We have no formal control [over surveillance capitalism] because we are not essential to the market action.’ (2019: 328; original emphasis)

The purpose of this essay was not to posit solutions to the predicaments outlined within. Nevertheless, I feel it would be irresponsible of me not to suggest a small number of potential approaches by which society might begin to mount a defence against the ceaseless rendition of our behavioural data for the financial gain of internet corporations, and the clandestine operations of political campaigns.

Zuboff’s quote may signal the first step in working against the perpetual harvest of our data. Collectively, we need to reinstate our importance to the market action she highlights. As individuals this would prove impossible, but as a society there is the potential for vast numbers to exert real influence. We need to remind internet corporations, government bodies and other parties who would use our data of our essentiality to their economic and political imperatives.

We must demand fairness, accountability and transparency in all areas of behavioural capture and modification — from capitalism to politics — in line with the objectives of the ACM FAT Conference. (Association for Computing Machinery, 2019) With regards to government legislation, citizens must demand more stringent laws — similar to the European Union’s General Data Protection Regulation (EU GDPR, 2019) — which further data protection for all individuals.

Ultimately, better educating citizens on the means of behavioural capture will enable individuals to make more informed decisions about the internet-enabled devices they use and the permissions they give to web developers, internet corporations and third parties. It is widely held that better-informed citizens are more capable, more inclined, and better prepared to make demands of those corporations reliant on them. As Nissenbaum concludes, ‘The dominant approach to addressing these concerns and achieving privacy online is a combination of transparency and choice. Often called notice-and-consent, or informed consent, the gist of this approach is to inform website visitors and users of online goods and services, of respective information-flow practices and to provide a choice either to engage or disengage.’ (2011: 34)

I bear witness to the growing materialisation of Asimov’s ‘psychohistory’ in the predatory nature of surveillance capitalism. I live in hope that someday I might read a work of science fiction which can affirm the means by which individuals can altogether wrest control of their data from a system intent on using human nature against itself.


NB: A version of this essay was originally submitted as a coursework deliverable for the Internet, Society, and Economy module at the University of Edinburgh, 2019.


Asimov, I. (2016) Foundation. 1st ed. London: Harper Voyager UK.

Association for Computing Machinery (2019) ACM FAT Conference. Association for Computing Machinery.
Available from: https://fatconference.org/index.html

Bellamy Foster, J. and McChesney, R.W. (2014) Surveillance Capitalism: Monopoly-Finance Capital, the Military-Industrial Complex, and the Digital Age. Monthly Review, Vol. 66 (No. 3), 1–31.

Bleecker, J. (2009) Design Fiction: A Short Essay on Design, Science, Fact and Fiction. Near Future Laboratory.

Bohannon, J. (2017) The Pulse of the People: Can Internet Data Outdo Costly and Unreliable Polls in Predicting Election Outcomes. Science, Vol. 355 (No. 6324), 470–472.

Božovič, M., ed. (1995) The Panopticon Writings. 1st ed. London, New York: Verso.

Bratton, B.H. (2015) The Stack: On Software and Sovereignty. 1st ed. Cambridge, Massachusetts; London, England: MIT Press.

Brody, B. (2019) Google, Facebook Set 2018 Lobbying Records as Tech Scrutiny Intensifies. Bloomberg.
Available from: https://www.bloomberg.com/news/articles/2019-01-22/google-set-2018-lobbying-record-as-washington-techlash-expands

Concordia (2016) Cambridge Analytica — The Power of Big Data and Psychographics. Concordia.
Available from: https://www.youtube.com/watch?v=n8Dd5aVXLCc

Curran, D. (2018) Are You Ready? Here is All the Data Facebook and Google Have on You. The Guardian. Available from: https://www.theguardian.com/commentisfree/2018/mar/28/all-the-data-facebook-google-has-on-you-privacy

Curran, J., Fenton, N. and Freedman, D. (2012) Misunderstanding the Internet. 1st ed. New York, New York: Routledge.

EU GDPR (2019) GDPR Key Changes. EU GDPR.
Available from: https://eugdpr.org/the-regulation/

Facebook (2018) Terms of Service. Facebook.
Available from: https://www.facebook.com/legal/terms

Flynn, K. (2014) Facebook Will Share Users’ Political Leanings with ABC News, BuzzFeed. Huffington Post.
Available from: https://www.huffingtonpost.co.uk/2014/10/31/facebook-buzzfeed-politics_n_6082312.html?guccounter=1

Galič, M., Timan, T. and Koops, B. (2017) Bentham, Deleuze and Beyond: An Overview of Surveillance Theories from the Panopticon to Participation. Philosophy & Technology, Vol. 30 (No. 1), 9–37.

Gold, H. (2014) Facebook Data Mining for Political Views. Politico.
Available from: https://www.politico.com/blogs/media/2014/10/facebook-data-mining-for-political-views-197933

Hern, A. (2018) Cambridge Analytica: How Did it Turn Clicks Into Votes? The Guardian. Available from: https://www.theguardian.com/news/2018/may/06/cambridge-analytica-how-turn-clicks-into-votes-christopher-wylie

Hitlin, P. and Rainie, L. (2019) Facebook Algorithms and Personal Data. Pew Research Center.
Available from: https://www.pewinternet.org/2019/01/16/facebook-algorithms-and-personal-data/

Huxley, A. (2007) Brave New World. 1st ed. London: Vintage.

Lyon, D. (2014) Surveillance, Snowden, and Big Data: Capacities, Consequences, Critique. Big Data & Society, Vol. 1 (No. 2), 1–13.

Lyon, D. (2003) Surveillance after September 11. 1st ed. Oxford: Polity Press.

MacAskill, E. and Dance, G. (2013) NSA Files: Decoded. The Guardian. Available from: https://www.theguardian.com/world/interactive/2013/nov/01/snowden-nsa-files-surveillance-revelations-decoded#section/1

McEleny, C. (2015) European Commission Issues €3.6m Grant for Tech that Measures Content ‘Likeability’. Campaign. Available from: https://www.campaignlive.co.uk/article/european-commission-issues-€36m-grant-tech-measures-content-likeability/1343366

McKinlay, A. and Starkey, K. (1998) Foucault, Management and Organization Theory from Panopticon to Technologies of Self. 1st ed. London: SAGE.

Mitchell, A., Matsa, K.E., Gottfried, J. and Kiley, J. (2014) Political Polarization and Media Habits: Section 2: Social Media, Political News and Ideology. Pew Research Center. Available from: https://www.journalism.org/2014/10/21/section-2-social-media-political-news-and-ideology/

Nissenbaum, H. (2011) A Contextual Approach to Privacy Online. Daedalus, Vol. 140 (No. 4), 32–48.

Orwell, G. (2013) 1984. 1st ed. London: Secker & Warburg.

Oxford Dictionaries (2019) Big Data Definition. Oxford Dictionaries. Available from: https://en.oxforddictionaries.com/definition/big_data

Realeyes (2019a) Realeyes: Emotional Intelligence. Realeyes.
Available from: https://www.realeyesit.com

Realeyes (2019b) Realeyes White Paper: Emotion Measurement. Realeyes.
Available from: https://www.realeyesit.com/Media/Default/Whitepaper/Realeyes_Whitepaper.pdf

Regalado, A. (2018) EXCLUSIVE: Chinese Scientists are Creating CRISPR Babies. MIT Technology Review.
Available from: https://www.technologyreview.com/s/612458/exclusive-chinese-scientists-are-creating-crispr-babies/

Savage, C. and Lichtblau, E. (2016) Classified 2002 Letter on N.S.A Eavesdropping is Made Public. New York Times. Available from: https://www.nytimes.com/2016/03/01/us/politics/classified-2002-letter-on-nsa-eavesdropping-is-made-public.html

Shachtman, N. (2010) Exclusive: Google, CIA Invest in ‘Future’ of Web Monitoring. Wired.
Available from: https://www.wired.com/2010/07/exclusive-google-cia/

Statista (2019) Number of Monthly Active Facebook Users Worldwide as of 4th Quarter 2018 (in millions). Statista. Available from: https://www.statista.com/statistics/264810/number-of-monthly-active-facebook-users-worldwide/

Varian, H. (2014) Beyond Big Data. Business Economics, Vol. 49 (No. 1), 27–31.

Zittrain, J. (2014) Facebook Could Decide an Election Without Anyone Ever Finding Out. New Statesman. Available from: https://www.newstatesman.com/politics/2014/06/facebook-could-decide-election-without-anyone-ever-finding-out

Zuboff, S. (2019) The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. 1st ed. New York: PublicAffairs.

Zuboff, S. (2015) Big Other: Surveillance Capitalism and the Prospects of an Information Civilization. Journal of Information Technology, Vol. 30 (No. 1), 75–89.

Zuboff, S. (1989) In the Age of the Smart Machine: The Future of Work and Power. 1st ed. Oxford: Heinemann Professional.
