Alchemical Intelligence.
Why today’s AI-adherents sound just about as crazy as alchemists, and what it might really take to create intelligence.
A lot of “very serious people” are very seriously concerned about Artificial Intelligence.
Concerned that with the rise in computing power, machines will one day become not just “intelligent”, but “super-intelligent”. Which could pose a significant threat to us — a super-intelligent computer might not like its human creators all that much.
So concerned that they’ve penned an open letter calling for research to ensure “AI systems are robust and beneficial: our AI systems must do what we want them to do.” I.e. we’d rather they didn’t take over the world and kill us all.
That does sound quite scary.
So how worried should we really be?
Well, according to the people who actually do AI research, not very.
Why the gap?
A little historical perspective might be helpful.
A kind of magic.
Before the Scientific Revolution, many wise and serious people believed that things were made of some sort of underlying substance, which had a number of attributes attached to it.
Which has an initial plausibility, as it’s kind of the way we speak.
Gold is yellow, fusible, inert, malleable, etc. Ask what gold is aside from its yellowness, fusibility, inertness and malleability, and you’ll say, “Oh, I don’t know — it’s stuff — you know — ‘substance’, the attributes inhere in that”.
Which gave rise to a then eminently plausible pursuit of alchemy — if you could get hold of just the right combination of attributes and swap them around on some substance, you could take some base metals which were yellow, malleable, inert and fusible, mush them together, smash them about a bit to transfer the attributes, and create some bona fide gold.
Hey presto, you’re rich!
And while you’re doing all this mushing and smushing, you can think about how awesome life will be when you’re rich, and maybe catch up with some other alchemical buddies and speculate about what it will be like in this new world of alchemical plenty — How rich will we be? How much really is too much? What if we could make other stuff too? Maybe even creatures? C’mon — it will be amazing!
And people tried hard to make gold.
So seriously was this taken, that from 1404 to 1689, there was a law in England against “multiplication”. Not the mathematical operation, you understand, but the “multiplication” of gold through alchemy, as many “very serious people” thought that such a capability would significantly destabilise the prevailing economic and social order.
You generally don’t go to the trouble of passing a law against something that you think is impossible.
Nor would you go to the trouble of trying to repeal the law as Robert Boyle did for most of his working life. “Boyle’s Law” Robert Boyle; that guy. Indeed, it was at his insistence that the law was finally taken off the statute-books at the end of the seventeenth century.
Why was he so adamant that people be allowed to “multiply” gold? When Boyle died, John Locke (England’s most influential philosopher) was an executor of the will. Amongst the many manuscripts Boyle left behind, Locke found Boyle’s recipe for the production of gold, and immediately wrote to Isaac Newton (yes, the Isaac Newton — himself a practising alchemist) to communicate the news, and set about acquiring the materials to carry out the experiment. Boyle evidently thought he’d discovered how to transmute base metals into gold.
Which seems a decent reason to make alchemy legal again.
These were about as heavyweight a set of intellectuals as you could find in any country in any era. And yet their ardent interest in alchemy seems laughable, not to say absurd, to us — because we now have a much better understanding of the fundamental structure of matter, and see their speculation and expectations as hopelessly ill-grounded and naïve.
Alchemists‘R’Us.
Right now, our quest for, and expectations of, AI are rather like those of the alchemists.
They tried to graft the attributes of gold onto a single substance.
AI advocates appear to be doing just the same — taking the attributes of “intelligence” — raw computational power, “recognising” faces, or mapping spaces, or processing language, or spotting patterns — and hoping that if we smush them all together in a very powerful computer, it will somehow magically add up to what we call “intelligence”.
But the alchemists were fundamentally mistaken about the structure of gold — the attributes were the consequence of the elemental composition, not its cause.
And I would contend that the same applies in this case — “intelligence” is not just a bit of “processing power” or “reasoning”, or “spotting patterns”, or processing language — those are its indications, consequences, and manifestations.
Define intelligence.
The real question AI-adherents need to answer is “what is the fundamental structure of intelligence?”
And that is a very hard question to answer. Indeed, I don’t think many people (even practitioners) have given the answer much more thought than “I’ll know it when I see it”.
It’s certainly not “narrow AI” — the completion of certain specific tasks, like playing chess, or even flying a plane — we already have machines that can do these things way better than humans — and we’re not that scared of (or even that impressed by) them. Don’t believe me? Ask Siri.
It’s the more “general AI” that worries them: the human ability to make decisions, which Professor Linda Gottfredson has described as “a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly, and learn from experience”.
Which is very vague indeed, so she then goes on to say “it is not merely book learning, a narrow academic skill, or test-taking smarts. Rather, it reflects a broader and deeper capability for comprehending our surroundings — ‘catching on,’ ‘making sense’ of things, or ‘figuring out’ what to do.” Which doesn’t really narrow things down much more than the above-mentioned “I’ll know it when I see it”.
Check out the Wikipedia entry — even those who spend their time studying intelligence can’t seem to agree on what it really is. You’ll have a damn hard time creating something when you’re not really sure what you’re creating.
Intelligence ostentate.
Well, if we can’t define it, let’s see if we can at least see it.
Let me ask you a question.
In front of you there is a kilogram of gold and a kilo of water. Which do you choose?
Your first reaction might be “Gold! — it’s worth loads!”
But then you probably thought, “Hold on, this obviously is a trick question. There must be something more going on here than meets the eye — maybe water?”
And then you thought, hold on another minute — “I know nothing about this gold or this water. Where is it and where am I, and why?”
And then I said, “You’re in a room made of gold, in a world made of gold”. And you think, “Oh, maybe not so interested in gold now — if there’s lots of it”.
And then I said, “And you haven’t had anything to eat or drink for four days”, and it’s not even a question, you couldn’t really do anything but choose the water — thirst would more or less compel you to take a drink.
Or I said, “Actually, your arm is on fire!” — straight for the water.
The “intelligent” choice is entirely contingent on the circumstances, and how the circumstances affect what you might perceive to be your best interests.
And I use the word “perceive” deliberately — if you were on fire, or hadn’t drunk anything for days, there wouldn’t be any thinking or deliberation or consideration, you would just react in a way that served your interests.
I can just feel it.
As the perception of circumstances changes, your response to the choices changes, and what appears to be the “intelligent” choice in one circumstance is completely different in another. But why is that? Why does your response change so radically in these different circumstances? After all — gold is gold, and water is water.
Well, because the perception of what serves our best interests changes as the circumstances change. Ordinarily, we see gold as a scarce, tradable commodity which, as a permanent store of value, can be very handy, and water as freely available, and therefore more or less worthless.
Unless, of course, we’re thirsty, or our arm is on fire, in which case other imperatives apply and we make very different choices.
But all of these choices are backed by our perception of the value of these things to us in these circumstances. If we were unable to attach any value to either of these things in any circumstance, we’d be hard pressed to make any choices, and completely incapable of what we’d call “intelligent” decisions.
And what serves our interests? Well, again, you don’t need to subscribe to the entire theory to grasp the point, but that chap Maslow did a reasonable job of outlining what humans perceive their needs to be — from the physiological needs for air, water, sleep, to the need for bodily security, shelter, access to resources, through to social needs for companionship, up to self-expression and actualisation.
Humans need things, and your perception of the world, and your appreciation and valuation of the things in it, are entirely contingent on your meeting these needs.
If you don’t have food or water, you see the world entirely through that lens — a hot dog and a glass of water are the most important things in the world to you, and you’ll trade almost anything for them, because you value them so highly, because without them, you will cease to be you.
Once you’ve got the basics out of the way, you’ll be willing to trade off one need against another, undertaking some first, deferring others until later, investing time in relationships today, spending time fitness training tomorrow, studying for work the day after — each decision a careful calculation of what combination of actions will help you meet your multiple goals based on your perception of your needs and the circumstances at hand.
The Matrix.
And there’s no doubt about it, we’re pretty sophisticated when it comes to trading off our needs, planning against them, mitigating risk, marshalling resources, as individuals, groups, societies and even, sometimes, as a species.
We’ve got so good at this sort of thing that we don’t just see and respond to our immediate environment; we build immensely complex models of the world and hypothesise about the rules that underpin it, producing ever more abstract and ever more complex representations which we can then manipulate to our ends.
We are rather like Neo at the end of The Matrix — our understanding of the world is now so powerful and sophisticated that we can model, predict and then directly manipulate our environment. We call the abstract models we use “science”, and that manipulation “technology” and “engineering”.
And when we find individuals who are particularly good at the type of things that help contribute towards these abilities, we call them “intelligent”.
Needs whence?
It’s those needs that drive your perception of the value of each choice, and it’s that perception which drives what you decide to do in any given circumstance — each choice helps you meet an end, each end being determined by the fundamental needs you have to address.
So where do you get those needs from?
Well, that’s pretty easy.
We’re here to make more of us! We’re here to reproduce. And boy, as the product of millions of years of evolution, we’re damn good at it! There are 7bn+ humans and counting.
That’s where our needs come from — they’ve been hardwired in us through our genes and millions of generations of “live, reproduce, die, repeat” evolution.
Everything that exhibits anything that remotely resembles what we call “intelligence” has been designed to reproduce — that is the ultimate end of all those creatures, and all their “intelligence”.
We attach value to choices because we have fundamental needs, and we have fundamental needs because we have been designed to reproduce. But how that need to reproduce creates those needs, and how those needs generate both the models of the world we create and the values that we fill those models with — well, that’s a damn hard question to answer.
Because it’s not just the totting up of points — we can’t say to a machine “collect as many food points as possible” and then expect the machine to kill someone for some food, because nothing really bad happens to the machine if it doesn’t make its points quota.
The machine doesn’t feel bad, it isn’t wracked with pain, it doesn’t go hungry.
Because it doesn’t need to, because the machine hasn’t been built as a system that exists solely to reproduce, and whose entire being is encoded with this imperative, whose blueprint was shaped by circumstance over an unimaginably long time to be the most effective possible reproduction system, one component of the success of which is to stay alive long enough to reproduce, which requires food, which requires that it be hungry on a reasonably regular basis, and then do something about it.
The machine has nothing at stake.
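To make that concrete, here is a toy sketch in Python (purely illustrative: the class, the quota and the numbers are invented for this example, not anyone’s actual system). Whether this “point-counting” machine makes its quota or not, nothing about its existence changes; failure is just a number staying low.

```python
import random

# A toy "point-counting" machine (hypothetical, for illustration only).
# It is told to maximise "food points", but nothing about its own
# existence depends on whether it hits the quota.
class PointCounter:
    def __init__(self, quota):
        self.quota = quota
        self.points = 0

    def act(self):
        # A crude stand-in for "optimise the objective": gather whatever
        # points happen to be available this turn.
        self.points += random.choice([0, 1, 2])

    def review(self):
        if self.points < self.quota:
            # The entire "penalty" for failure: a number stays low.
            # No hunger, no pain, no threat to its continued existence.
            return "quota missed, and nothing follows from that"
        return "quota met, and nothing follows from that either"

agent = PointCounter(quota=10)
for _ in range(5):
    agent.act()
print(agent.review())
```

You can weight the quota however heavily you like; the machine still will not care, because there is nothing it is like for it to fail.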
Even if the most powerful “point-counting” machine were capable of killing a man, or appropriating a resource, or replicating itself, why would it?
It doesn’t fundamentally hunger for it in the way we do for our needs. We colloquially say that our iPhones need power, but they don’t need anything. If we want them to work we need them to have power, but that’s our need, not the iPhone’s.
iPhones don’t need anything, they don’t want anything.
The values we attach to outcomes, and the needs that drive those values, and that evolutionary mechanism — these are the fundamental underpinnings of the attributes we characterise as “intelligence”.
All these computer systems for perceiving faces, for viewing rooms, for processing language and creating responses — we are building all of the means, without understanding that all the means in the world can never add up to an end.
We are creating incredibly powerful systems that understand nothing, and can make no decisions because, essentially, they have no stake in the world — they have no needs, no ends, and so can never be “intelligent”.
Evolving intelligence.
So given all this, what might we say “intelligence” amounts to?
As a first stab, I might contend that “intelligence” is structurally dependent on having a stake in the world — having a reason for being which necessitates engineering and adaptation and the attribution of value to choices as the means to address that reason.
In our case, and in the case of all other “intelligent” creatures we’ve ever encountered, that reason for being is reproduction in the context of evolution in a competition for scarce resources against other creatures.
Which depends not at all on computational power in the first instance.
Let me illustrate with an example.
A cell wall is a stupendously successful evolutionary adaptation — a physical expression of the truth that the outside is unstable, the inside stable, and therefore more likely to be able to preserve itself and reproduce over time. The physical instantiation of this truth requires zero acts of “computation”.
And, in the second instance, a cell wall is a wholly embodied attribute, incapable of meaningful “emulation” through any sort of “computation”.
Now, you might say, that’s great for cell walls, but we’re talking about “intelligence” here — that’s not some physical attribute.
Well, first, “intelligence” obviously is a physical attribute — human intelligence is a property of humans, and humans are physical creatures. Unless you are going to revert to some extremely peculiar form of philosophical dualism — the notion that the mind and body are entirely separate substances — you have to concede that “intelligence” is grounded in physical objects.
(And if you do subscribe to some form of dualism, you are going to have a damn hard time creating “intelligence” — what have you to work with to produce it? Spiritual substance? Processing cycles? Pattern recognition? By itself? Really?)
Second, consider the fight-or-flight mechanism. This is an adaptive reaction that is largely pre-conscious — we don’t think about it, we just do it when a significant threat presents itself.
This reaction has clear physical causes and a clear physiological response which affects our perception of the world and drives the decisions we make about it. Given what we know about our physicality, our psychology, and our evolutionary history, on what grounds could we assert this mechanism to be wholly separate from what we typically describe as “intelligence”? It works in the same basic way, using the same basic equipment, achieving the same basic ends.
It seems almost certain that, as a product of biological evolution, “intelligence” is equally intrinsically embodied, inseparable from the organism in which it has evolved, and fundamentally “un-emulatable”.
Even if we could match or exceed the computational power of a human brain, that would no more make an intelligence than emulating its strength or height or perspiration in a computer would create those things. These attributes are embodied, physical things that require embodied physical instantiation.
The philosopher John Searle puts it nicely. We could create an immensely powerful computer simulation of the brain that matches what we might think of as the computational power required for human “intelligence”. Is that emulation “intelligent”? Suppose you create an equally accurate computer emulation of the stomach — you wouldn’t shove a piece of pizza into the disc drive and expect it to be digested.
Ditto an emulation of evolution to create “intelligence” through some simulation of biological means. Effectively emulating evolution is simply not the same as something actually, physically evolving.
Flight simulation software is incredibly realistic these days — but sitting in front of Flight Simulator 95 for 8 hours won’t get you from London to New York.
Moreover, our memory — our very sense of who we are as individuals — is intrinsically tied to our embodiment.
Recent research has shown that we have a sense of “self” only because we have a sense of the physical body within which the “self” is contained, and that our memory of ourselves is intrinsically related to our sense of physical and spatial orientation within the world. This “body image” — our sense of our physical presence in the world — is constitutive of our consciousness and its consequent “intelligence”.
Without this sense of ourselves in the world, there is no conscious individual intelligence. Quite how the adherents of AI plan to re-create this intrinsic sense of a “body in the world” without a body in the world is not at all clear.
The alkahest.
There’s no doubt that machines will continue to make incredible leaps and bounds in their capabilities — they’ll be better able to map and navigate their environments, monitor and anticipate our needs, learn and translate languages, prove mathematical theorems, spot patterns in data to generate new hypotheses, and all sorts of other impressive feats of computation. But that is what they will be — impressive feats of computation.
A system or machine without some end or other of its own, an end which incentivises it and drives it to make decisions and to shape its environment to meet that end, can never be what we typically think of as “intelligent”.
So, those who hype, boost and preach the coming of the AI super-intelligence are generally looking in the wrong places at the wrong problems, and will continue to be disappointed if they continue to do so.
Just like those alchemists of old.
Now that doesn’t mean there’s nothing to AI or AI research — after all, all that alchemical research eventually laid the foundation for the modern discipline of chemistry. And the achievements of “narrow AI” are pretty impressive — Self-driving cars! Instant translation! Unbeatable poker players! — and will only get more so.
But it does mean that people who want to create “general AI” need to do some mental house-cleaning before they’ll get anywhere.
E.g.:
- Don’t think about “simulating” intelligence, or expecting it to be a product of some abstract “computational power” or “pattern recognition” — “intelligence” is an embodied property of physical objects
- Ditto a “hardware / software” analogy from computing — “intelligence” affords no such distinction; you can’t “upload your mind” to anything, because your mind is inextricably, intrinsically woven into your body
- Equally, your body is woven into the world — so there’s likely no such thing as an “intelligence” fundamentally cut-off or separate from the world; if you can’t be in, be affected by, and in return manipulate, the world, you likely can’t make “intelligent” decisions about it
- Don’t think about instances, think about processes — “intelligence” doesn’t just exist by itself in a single thing, it is the product of evolution in a species over time
- Don’t think you can create “intelligence” absent some fundamental stake in the world — it’s only having a stake that helps us make what can be deemed “intelligent” decisions, and if history is any guide, that stake might have to be reproduction
The fact of the matter is, we’re at least one, and maybe several fundamental theoretical breakthroughs away from creating any substantive form of general Artificial Intelligence.
These aren’t the droids you’re looking for
So please, all you AI-boosters, futurists, visionaries and associated hangers-on: stop wasting your time (and ours) telling us how amazing, different, scary and exciting the world is going to be when we can transmute base metals into gold (“we’re all going to be rich, rich I tell you!”), and spend a bit more time considering the fundamental structures that underpin your subject matter, to see if what you’re so worried about is even possible, let alone probable, never mind imminent.
Then you might, at last, have words worth their weight.