The Atomic Human: Understanding Ourselves in the Age of AI, by Neil Lawrence
Review by John Edwards (UK Information Commissioner)
One of the challenges of the recent discourse on artificial intelligence is that we have lacked a shared understanding of the technologies grouped together under that term. I often see a simple algorithm that applies a fixed set of criteria to sort a set of data described as artificial intelligence. Those applications can reduce administrative burdens and, done badly, can promote discriminatory decision making. We’ve had autocorrect in our phones and Word documents for years — are they AI? What about the map that will give us the walking, cycling or public transport directions to our destination, calibrated for the real-time traffic conditions? The voice recognition that guides you through a phone menu?
Of course, the discourse has now pivoted to generative AI, with image, text and video generation available to all. And public policy prognostications have moved on to apocalyptic visions of artificial general intelligence, or frontier AI.
How to make sense of all this? In his recent book, Neil Lawrence has some ideas.
The first is to question the concept of intelligence. What is intelligence? Most of us would respond along the lines of “an ability to think through a problem”. It involves thinking, collecting, analysing and understanding data. But not always. A reflexive reaction in a moment of crisis (Lawrence uses the example of a cycle crash he witnessed) can direct the body in an instant, without thought or deliberation or analysis. A reflective decision, made at greater leisure, builds on experience, the available data, and interpolations and extrapolations. Even the most thoughtful chess grandmaster or Go player does not think through all the possible permutations of a given move. To do so would cripple them into inaction. Human intelligence takes shortcuts with which machines have struggled, and continue to struggle.
But if intelligence is something that allows a human to respond to stimulus and take action, why would we limit our understanding of it to cognitive neural concepts? The immune system, Lawrence suggests, is a form of intelligence in which the body acts through bio-chemical signals to direct a Goldilocks level of immune response, mobilising enough white blood cells and cytokines to neutralise a threat, but not so much that it begins to attack healthy tissue.
We associate the ability to transmit and act on data with intelligence. Our DNA has passed on data intergenerationally throughout the evolution of our species.
In The Atomic Human, Lawrence seeks to identify the irreducible human qualities of intelligence. We have long used tools. AI is another tool. In beginning with the atomic human, rather than with the characteristics of the tool, Lawrence seeks to identify the limits of AI, what it can, or should, never be able to replace.
He proceeds by way of analogy, and reveals that that word also describes one form of automated machinery: analogue decision makers. He explains communication theory through examples from the Second World War and his grandfather’s role in the invasion of Normandy; there is the bicycle crash, the moon landing, and a drug trial that overstimulates the subjects’ immune systems and causes multiple organ failure. And Winnie-the-Pooh plays no small part.
Analogue systems copy models from nature to illustrate a system or control a mechanism. Money flows through the economy depending on interest rates, inflation, exchange rates and a variety of other factors, which can be modelled by controlling valves altering the flow of water through the tanks and pipes of a machine called the MONIAC. There is one in the Reserve Bank in my home town in New Zealand. Anti-aircraft firing mechanisms can be linked to radar in the same way that a bat locates and captures its prey.
Lawrence patiently, but without condescension, leads us through the many intellectual and social linkages that bring us to 2024, and strips away hype and hyperbole to soberly assess the challenges and opportunities of AI. From Newton to Wittgenstein to Bernard Shaw to Turing, we see the development of the scientific method meet logic and philosophy, culminating in computers that have consumed the sum of human knowledge but are not yet able to display the versatility of human intelligence.
AI did not arrive with ChatGPT, and we are not on the verge of being overrun by Terminator-like robots.
The fuel of AI is data, and Lawrence deftly identifies the strengths and shortcomings of our existing regulations and their ability to identify and protect against harmful applications of AI.
Do we only have ourselves to blame for surrendering our power and data to the machine? Lawrence seems to suggest so, in part:
“Part of how we control who we are is through choosing what to reveal about ourselves. By giving the machine access to so much information about us we are undermining our ability to exercise that control and making ourselves vulnerable to the machine” (p.27)
“Choosing what information we share and for what purpose is one of the ways we maintain control over who we are. In a world where it can be used to either collaborate or compete with us, it is understandable that we choose to be circumspect about who we share our personal information with.” (p.52)
But the situation is more complicated than that, and the power imbalances and information asymmetries too profound. Citing Shoshana Zuboff’s seminal work The Age of Surveillance Capitalism, and Daniel Kahneman’s guidebook on the shortcomings of human rationality, Thinking, Fast and Slow, Lawrence acknowledges that the cognitive blind spots Kahneman identifies have been exploited by the surveillance capitalists to manipulate consumers into surrendering the data that feeds the machine that creates the AI, to the point that the business models have moved from capturing and holding our attention to predicting and manipulating our intentions: restricting our world views, prejudging and serving us what we want before we even know we want it.
These manipulations and exploitations are real and present challenges, but they do not of themselves represent the kind of existential threats some commentators have envisioned:
“The problem with [the apocalyptic vision of an apex artificial intelligence manipulating and dominating us] … is that for those consequences to pan out in practice, it would require a precise and rankable definition of intelligence. The argument is based on an incompatible combination of precise and imprecise language. Through our social media the machine has already manipulated us, but I don’t think that makes it more intelligent than us …” (p.362)
As the Information Commissioner, whose responsibility it is to protect UK society from both sorts of harms, I found his assessment of the legal framework both insightful and challenging:
“Dating back to the early 1980s, legislation around personal data has sought to protect us from the abuse of such power but that regulation was originally designed to protect us from “consequential” decisions. Decisions about whether we should receive a loan, or medical treatment, or what university we’re allowed to go to, or what our insurance rate should be. That legislation goes by the unfortunate name of “data protection” but its intent is to protect people, not data, and a better name for the legislation would be “personal data rights”.
Unfortunately these regulations don’t directly protect us regarding the “inconsequential” decisions that are made about us on a regular basis by social media platforms. Decisions about what posts to highlight in our social media news feed or what adverts to show us. This series of “inconsequential” decisions can accumulate to have a highly consequential effect on our lives and even our democracies. This is a new threat that isn’t directly addressed by existing data rights” (p. 364)
I’m not sure I agree with that final conclusion. It is true that data protection legislation has not been deployed in that way, but it does not necessarily follow that it cannot be.
This reflects a wider shift in societal understandings and expectations. Data rights were, and in many places still are, seen as providing individual rights and remedies, but we are increasingly understanding that the threats are being visited upon the wider community. A recommender engine that delivers increasingly right-wing material to a consumer already displaying a curiosity about that end of the political spectrum may not be experienced as harmful by that individual, but an increase by one in the number of Nazis or violent extremists of any sort in the world is a net negative for society. We can see that personal data can be manipulated and applied to create division, and to undermine democratic institutions and social cohesion, and so the principles of the law must be applied to mitigate and ameliorate those risks. And they can be, by holding the developers to account for their products and requiring that they fully examine and mitigate the potential risks. Privacy and data protection are increasingly being understood as public goods as well as individual rights.
Lawrence goes on to propose collectivised data trusts as a mechanism for addressing these harms. The concept is an interesting one, worthy of examination, as a way to move us away from the feudalised system of what the law recognises as “data controllers” and “data subjects” (a term for which I share Lawrence’s scorn).
Where I and many of my colleague practitioners would agree with Lawrence is that we need to take urgent steps to hold developers to account.
Ultimately, Lawrence is optimistic. The current developments in artificial intelligence offer great opportunities as tools to overcome some of our human limitations, without sacrificing our essential humanity. They can neither ape nor replace our agency, or the irreducible essence of what it is to be human. The Atomic Human.
About the Author
John Edwards is the UK’s Information Commissioner. The Information Commissioner’s Office (ICO) is a non-departmental public body which reports directly to the Parliament of the United Kingdom and is sponsored by the Department for Science, Innovation and Technology. He previously served as New Zealand’s Privacy Commissioner, and before that worked as a barrister.
***
This is the blog for Data & Policy (cambridge.org/dap), a peer-reviewed open access journal published by Cambridge University Press in association with the Data for Policy Conference and Community Interest Company.