The Essential Humanity of AI

We can only approach AI in human terms, and see in it our own reflection

Paul Siemers, PhD
Brain Labs
8 min read · Jun 10, 2024


Robotic woman sees herself reflected as a child
Created using NightCafé

Introduction

Artificial intelligence is a matter of growing international concern. For example, a 2023 survey found, “In nearly all countries surveyed, more than half of the respondents were worried about the risk of AI being used to carry out cyberattacks, AI being used to help design biological weapons, and humans losing control of AI.”

How should we respond to a threat of this magnitude? In a previous article, I argued that we lack a fundamental understanding of AI. This basic confusion thwarts our attempt to co-ordinate an adequate response. We are like knights confronting a shapeshifting wizard who baffles and dazzles us until we stagger around disoriented.

This view may frustrate those who believe that, if we can only peel away the hype and popular misconceptions, we can reach an objective understanding of AI — firm ground on which to take a stand. In this article, I will explore the objective, “scientific” account of AI and see whether it can indeed lead us to clarity.

An Objective Approach to AI

To reach an objective view of AI, we need to strip away anything dependent on individual perspectives or opinions. We need to get beyond both utopian dreams and apocalyptic nightmares. We must shed conceptions of AI shaped by centuries of cultural imaginings, reaching back to the Greek myth of Talos, a 30-foot-high bronze automaton that guarded Europa, the Queen of Crete. In particular, we need to rid our understanding of AI of all vestiges of anthropomorphism — the deeply entrenched human tendency to read human attributes into non-human phenomena. Science has allowed us to relinquish our anthropomorphic conceptions of the weather and the planets — perhaps it can help us achieve the same thing for AI.

Starting with our current conception of AI and subtracting centuries of misconception would be lengthy and intricate. It appears much simpler to go back to objective fundamentals and move systematically forward from there. This way, we can build an understanding free from subjectivity and superstition. The first plank in this construction must be an objective understanding of intelligence itself.

Universal Intelligence

As noted in a previous article, there is no universal consensus about the nature of intelligence. For example, the Wikipedia article on intelligence refers to the many possible capacities involved in intelligence, including “abstraction, logic, understanding, self-awareness, learning, emotional knowledge, reasoning, planning, creativity, critical thinking, and problem-solving.”

There is, however, a strong contender for the most objective definition of intelligence: “Universal Intelligence” as defined by Shane Legg & Marcus Hutter (both of DeepMind).

Legg & Hutter started with a broad survey of well-known definitions of intelligence. From these, they distilled what they regard as the fundamental essence of intelligence:

“Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”

Intelligence, by this definition, requires only three elements:

  • An agent — something that is able to act
  • A range of environments — a set of external circumstances on which the agent can act
  • Goals — measures by which one can assess the success of the agent

Using this simple definition, Legg & Hutter derive a formula to measure the “Universal Intelligence” of an agent. Universal Intelligence enables the intelligence of any agent — human or non-human — to be calculated and compared.
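For reference, the measure they derive in their 2007 paper, “Universal Intelligence: A Definition of Machine Intelligence”, takes the form of a complexity-weighted sum over environments. The notation below follows my reading of the published definition, and should be taken as a sketch:

```latex
% Universal Intelligence of an agent \pi (Legg & Hutter, 2007):
% V_\mu^\pi is the agent's expected total reward in environment \mu,
% and K(\mu) is the Kolmogorov complexity of \mu, so simpler
% environments carry exponentially more weight in the sum.
\Upsilon(\pi) := \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```

In words: an agent’s Universal Intelligence is its expected performance summed over every environment in the set E, with each environment discounted by its descriptive complexity.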

Legg & Hutter use this formula to demonstrate some key factors contributing to Universal Intelligence. For example:

  • An agent that can learn and can exercise some degree of foresight will outperform one that cannot; and
  • An adaptable agent that performs well across many environments will do better than a specialised agent that excels in only a few environments.
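To make the second point concrete, here is a deliberately toy sketch in Python. The agents, environments and numbers are hypothetical inventions of mine, and the complexity values are simple stand-ins for the Kolmogorov term in the real formula; the point is only to show how a complexity-weighted sum rewards breadth:

```python
# Toy illustration of complexity-weighted scoring (all numbers hypothetical).
# Each environment has a complexity proxy k and a reward for each agent;
# an agent's score is sum(2**(-k) * reward), echoing the Upsilon formula.

environments = {
    "simple":  {"k": 2, "specialist": 1.0, "generalist": 0.9},
    "medium":  {"k": 5, "specialist": 0.0, "generalist": 0.9},
    "complex": {"k": 9, "specialist": 0.0, "generalist": 0.9},
}

def universal_score(agent: str) -> float:
    """Complexity-weighted sum of an agent's rewards across environments."""
    return sum(2 ** -env["k"] * env[agent] for env in environments.values())

for agent in ("specialist", "generalist"):
    print(f"{agent}: {universal_score(agent):.4f}")

# Output:
# specialist: 0.2500  (perfect in one simple world, useless elsewhere)
# generalist: 0.2549  (merely good everywhere, but ahead overall)
```

Even though the specialist is flawless in its home environment, the generalist’s adequate performance everywhere gives it the higher overall score, which is exactly the adaptability effect Legg & Hutter describe.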

As Legg & Hutter note, Universal Intelligence has several advantages as a definition. It is a formal measure with no room for interpretation. It captures the essence of what we generally define as “intelligence.” It is objective and unbiased. (Note: this assumes the goals can be measured in an objective and unbiased way — more on this below.) It can apply to any agent, however simple or complex. One could use it to compare the performance of a wide range of agents. These considerations make Universal Intelligence considerably better than less formal measures such as the oft-quoted Turing Test.

What makes Universal Intelligence interesting to us is Legg & Hutter’s claim that:

“Universal intelligence is in no way anthropocentric.”

Perhaps Universal Intelligence can provide the foundation for a truly objective understanding of AI, untainted by human stereotypes and preconceptions.

Robotic miner trades goods with a Native American
Created using NightCafé

GoldAI

Imagine an agent with a high Universal Intelligence. This agent achieves its goals across a wide range of environments. I imagine it as an agent that travels from planet to planet across the galaxy, looking for gold. Let us call this agent GoldAI.

Dropped onto a random planet, GoldAI works remorselessly to harvest the planet’s gold in the most effective way. On some planets, GoldAI builds mining machinery and digs mines. On others, GoldAI builds relationships with the residents of the planet and accumulates gold by trade — or by war, if that is more effective. Where necessary, GoldAI contends with dragons that have accumulated planetary gold in glittering hoards. But always, GoldAI gets the gold using just the right mix — the most intelligent mix — of hard work, force, ingenuity and guile. Of course, not all highly intelligent agents pursue something as simple as gold. Others may pursue bauxite, knowledge or universal love — or some optimal mix of these three. But they all share some measure of GoldAI’s relentlessness and adaptability.

GoldAI is a simplistic but reasonably fair example of an agent with a high Universal Intelligence. It might be argued that GoldAI is a monomaniac — that being good at achieving a single goal is not enough to constitute intelligence. We may think a truly intelligent agent should have higher goals — to fall in love, cure cancer or write poetry. However, Universal Intelligence makes no presumption about what an agent’s goals should be; it measures the agent’s effectiveness in achieving them. Thus, if we accept the definition of Universal Intelligence, GoldAI is highly intelligent. If so, does GoldAI give us an objective account of AI, one that makes no reference to human intelligence?

There are two significant problems with this claim:

Problem #1: The Human Perspective on AI

Firstly, the objective account of AI leaves vital questions unconsidered.

So, GoldAI is very good at finding gold on different planets. But is GoldAI conscious? Does GoldAI have emotions, feel pain or see colours? Does GoldAI think or only simulate thought?

The “objective” account of AI is silent on these questions. Indeed, being objective, it can only describe the objective behaviour of GoldAI, not its subjective, internal dimensions. Unless it can be proved that consciousness and emotion objectively impact GoldAI’s performance, consciousness and emotion are irrelevant to its Universal Intelligence.

However, from a human perspective, these are some of the most pressing and immediate concerns. That is why they are the recurrent questions in all literary and cinematic reflections on AI. Faced with GoldAI, we must be as interested in its capacity for feeling as in its capacity for harvesting gold. To be otherwise would reveal us to be as callous as slave owners; it would reveal us to be lacking in fundamental humanity.

For this reason, an objective account of AI cannot be regarded as complete. It ignores too many pressing human concerns. It is like a military history that leaves out any account of human suffering. Such a history may be convenient and useful for military purposes, but it is nevertheless deficient and potentially dangerous.

Problem #2: The Human Imprint on AI

Secondly, the objective account of AI retains, at its core, some fundamentally human elements.

Universal Intelligence relies on the idea of achieving goals. But where are these goals to come from? Presumably, humans will initially provide these goals. In this case, objective AI is founded on anthropocentric goals and so remains an essentially human project.

This objection may one day be overcome — in some remote future, an AI may choose its own goals wholly divorced from any human aspiration. However, even at this point, an intelligent agent (defined in terms of Universal Intelligence) must still retain the basic structure of utilising its environment to achieve its goals. But is this a truly universal definition of intelligence? This is hard to know because we have only human intelligence to compare it with. We can compare it with the intelligence of animals, but this is of limited value because we generally evaluate animal intelligence only by comparing it to human intelligence. Thus a comparison with animal intelligence is just an indirect comparison with human intelligence. We could be more confident if we could compare it with a representative set of alien intelligences.

Indeed, it is unclear whether it is even a comprehensive definition of human intelligence. The definition of Universal Intelligence assumes intelligence rests on the following:

  • An agent individually maximising its goal
  • Goals being objectively quantifiable
  • The environment being an input into the achievement of goals

GoldAI deliberately exemplifies the natural end-point of this approach to goal definition — a paragon of exploitative greed.

An equally plausible set of assumptions about intelligence would be:

  • Goals are shared and achieved collaboratively
  • Goals are qualitative and intangible
  • Goals must be a harmonious element of the environment

While these can be “parsed” into quantifiable and objective goals, it is hard to see how this can be done without losing something essential. For example, say our goal is the sustainable flourishing of a community. It is unclear whether this can be translated into a set of individual, numerical KPIs while retaining any authentic sense of the original goal.

Woman cradles a robotic baby
Created using NightCafé

Summary

Human interests and concerns can be messy and hard to grapple with. Philosophical complexities relating to consciousness, ethics, identity (and the like) are frustratingly opaque to technicians. Thus, there is the temptation to ignore these concerns and retreat onto the supposedly surer ground of scientific objectivity. On this familiar terrain, we can assume that anything important can be expressed mathematically.

Such a retreat would be a dangerous mistake. First, by trying to take an objective vantage point, we can lose sight of many of the most pressing human concerns. Second, even with our best effort at objectivity, human values are likely to remain deeply imprinted in AI. If we ignore this, we risk creating an AI that embodies a vicious set of values hidden behind a veneer of objectivity.

Treating the philosophical questions surrounding AI as a quaint sideshow is a huge gamble. We need to tackle these questions seriously and determinedly. If this requires reallocating resources from engineering to critical reflection, that is probably no bad thing.



Paul Siemers, PhD
Brain Labs

I am passionate about revealing how technology really works. I have 30+ years of experience in technology strategy and a PhD in Philosophy of Technology.