Angels and other superior beings

Aidan Ward
Published in GentlySerious
9 min read · Aug 14, 2018

Aidan Ward and Philip Hellyer

How would we know if someone or something was more intelligent than we are? Classically, if we met an angel, how would we know they were angelic? What could it possibly mean to say that an artificial intelligence system was more intelligent than we are? I have been having these conversations with a young German techie, Paul.[1]

Of course, there is a huge problem with the way we use the term intelligence, and what we could possibly mean by it. Paul’s favoured example was the function in YouTube that can put subtitles on a clip in real time, even from lip movements. If YouTube can do this, and if we project that sort of capability forward 20 years,[2] does that represent intelligence? Given that it is (and will be) more than almost any human can manage, why do we put it on a pedestal by calling it intelligent rather than useful?

One of the symptoms that bedevils work on systems is that systems practitioners/experts (including myself, a bit, on a bad day) think that they have a systemic view, of which their interlocutors have only partial subsets; i.e., that only they can see the big picture and can locate the views of others within it. I had a conversation in a SCiO meeting where I explained painstakingly to Dave Mettam how and why I disliked any classification scheme for people, anything that put them in pigeonholes. “Ah!”, he said in all seriousness, “that makes you one of those”. Hopeless.

We need to grasp firmly that this is what is happening. Say I meet an angel and I can see that he is more intelligent than I am. That is, I have an observer position that is able to assess the relative intelligence of the angel and myself. I hope you can see that, even if the angel outperforms me in every conceivable mental task, I have reserved the judgment that it is I who knows what is going on, who can judge what constitutes intelligence and what does not.

Let me take one more example that Paul produced. A medical diagnostic study showed, apparently, that human diagnosticians used three or four criteria to make a judgement before changing their logical stance and using subsequent criteria to confirm but not deny their initial judgement. By way of contrast, the competing artificial intelligence system used the many criteria more even-handedly, not subject to confirmation bias. Does that make the AI system more intelligent? My heart sinks because that is merely to erect confirmation bias (whatever that actually is) as a blemish on pure logical procedure and label the absence of confirmation bias as intelligence. Why? Even if the outcome of the whole enclosing system is a better recovery rate or whatever, I want to know what value we are putting on what and why we want to do that.

The autonomous vehicle debate (and indeed, lack of debate) is becoming paradigmatic. There are people who say that, statistically, autonomous vehicle systems make fewer mistakes than human drivers, and they may well have the numbers on their side.[3] But we don’t have, and may never have, a traffic system composed of 100% autonomous vehicles: it is always mixed.[4] And what counts as a mistake is a social judgment, not a technical one about rules. This is not simply about complexity: human drivers will recognise AVs on the road and will feel, in that encompassing way, that they know how the AVs will behave. They will “know” on that basis what aggressive manoeuvres they can get away with. This is a meta-learning system, a symmathesy if you like, where the learning modes are mixed. Who is right? What counts as a mistake or a fault?[5]

This argument mirrors our blog about boundaries. It may look as if the cars in a traffic system are usefully thought of as autonomous and independent, but a moment’s thought shows that they are not. The questions we are pointing to involve what I think other drivers and AVs will do, and of course my behaviour, as a result of my predictions, directly affects what they will predict and do too.

Angels

There are beings, perhaps Christine Lagarde is an example, who impress by force of intelligence and personality. Because more people defer to their judgment than they do to less-gifted others, they tend to look as if they are in the right. This is a self-reinforcing runaway system that can accelerate until she is right most of the time. (Until she’s very, very wrong!)[6] That doesn’t make her an angel, though we might choose to call her intelligent, as I just did. It is a term with a natural meaning in that context. Looking at Christine Lagarde interacting with her fellows does not limit the breadth of what intelligence might mean as we think of technical skill, logical adroitness, moral force, judgment of human foibles, or whatever.

Intelligence has to be whole, has to address the entire situation as it emerges. Its opposite is blindness and wilful blindness, an inability to deal with something that has been boundaried off. It is the externalities of the economist that can be the whole meal, all the important stuff that would otherwise disturb the neatness of the analysis.[7]

So, an angel comprehends more than we could possibly comprehend. Angels see more of the small arcs of larger circles, maybe even the larger circles themselves. They comprehend things that don’t even come into our awareness to be intelligent about.

Venkatesh Rao says that in an argument between someone with a more nuanced understanding of a situation and someone less intelligent, the less intelligent person tends to win, because their approach will be blunter and more aggressive. So, if we debate with an angel we will not even notice their superior grasp. This leads to what I call the Columbo syndrome, where making oneself appear stupid is a highly effective strategy.[8]

Artificial and other intelligences

Meeting an AI system is like meeting an angel. If it truly has intelligence, we wouldn’t know. How could we possibly generate a vantage point from which to say that another system or being was more intelligent than we were? The very possibility of such intelligence has to include the likelihood that we are being duped. We cannot assess intelligence greater than our own, or even recognise it most of the time.

Remember Ashby’s requisite variety theorem: for any system, being in control means having enough variety of responses. There is a close connection between having an appropriate response for every different occurrence in the environment and being intelligent. Or, put the other way round, to respond the same way to different situations is probably stupid and ineffective. This connection is close to the way we intuit intelligence in another being, and is why we cannot see what is going on when that being responds to things we do not ourselves understand.
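To make the variety bookkeeping concrete, here is a minimal sketch (ours, not Ashby’s own formulation; it assumes a deterministic regulator choosing from a fixed response table): with D distinct disturbances and R distinct responses, outcomes can at best be collapsed into ceil(D / R) states, so only matching variety with variety pins the system to a single goal state.

```python
# Minimal illustration of Ashby's law of requisite variety (counting form).
# With D disturbances and R regulator responses, the best achievable
# outcome variety is ceil(D / R): variety in outcomes can only be
# absorbed by variety in responses.

import math

def min_outcome_variety(disturbances: int, responses: int) -> int:
    """Lower bound on the number of distinct outcomes any regulator
    with this many responses can achieve."""
    return math.ceil(disturbances / responses)

for d, r in [(10, 1), (10, 2), (10, 10), (100, 10)]:
    print(f"{d} disturbances, {r} responses -> "
          f"at best {min_outcome_variety(d, r)} distinct outcome(s)")

# Only when responses match disturbances (r >= d) can the regulator
# hold the outcome to a single goal state.
```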

Suppose an AI system responds to a situation in a way we do not understand and cannot even see how we might understand. What then? Perhaps the system is being stupid. Perhaps we are being stupid. Perhaps the meaning of intelligent response has just become moot.

If you want simply to experience what it feels like to be out of your depth in trying to understand something that challenges your view of the world, you could do worse than read Carlo Rovelli’s The Order of Time. You will discover, for instance, that our sense of time depends principally on the blurriness of our vision: if we could observe more precisely, we would see that time does not have a natural direction of flow. And we would know that there is no such thing as the present time, except locally. Events in the universe are partially ordered, meaning that while it is possible to say that some events happened before some others, there are many events that cannot be ordered in this way. I like this last as a metaphor for many aspects of our thought: we think we can order things that cannot be given order.[9]
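To make “partially ordered” concrete, here is a hypothetical sketch (the coordinates, the function name, and the units with c = 1 are ours, not Rovelli’s): one event can be said to precede another only if a light signal could travel between them; spacelike-separated events are simply incomparable, neither before nor after.

```python
# Hypothetical illustration of a partial order on spacetime events.
# An event (t1, x1) precedes (t2, x2) only if a light signal could
# connect them: t2 - t1 >= |x2 - x1|, in units where c = 1.

def precedes(e1: tuple, e2: tuple) -> bool:
    t1, x1 = e1
    t2, x2 = e2
    return t2 - t1 >= abs(x2 - x1)

a, b, c = (0, 0), (2, 1), (1, 5)
print(precedes(a, b))                  # True: b is in a's future light cone
print(precedes(a, c), precedes(c, a))  # False False: a and c are spacelike
                                       # separated, so neither is "before"
                                       # the other
```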

What does that feel like? Do you want to argue from common sense that these statements cannot be true? Can you understand that to do so would be to assume that you have a vantage point from which to take a view on Carlo Rovelli’s intelligence and indeed on a whole cohort of theoretical physicists and their physics? Is that not what we do all the time?

The main casualty for me, in rearranging my thinking on this front, is the same notion of cause and effect that is challenged by Gregory Bateson describing small arcs of larger circles. We easily become enamoured of apparent cause and effect because of the control it gives us in a rather technological way. To understand that it is not real, whether mediated by our own desires or by the wonders of AI, now or in twenty years, is a very salutary liberation.

Smart animals

Frans de Waal has a book called Are We Smart Enough to Know How Smart Animals Are? We can take another stab at our question by looking at how we humans deal with assessing the rather different intelligence of other members of the animal world. You can guess the thesis of the book, and it is very well argued. We have trouble understanding that there are many, many things that animals can do that we can’t do and often can’t even see. As an example of this last point, whales were once held to be largely solitary animals, cruising the oceans: the reality is more that pods of related animals may be separated from each other by 20 or 40 miles but are in tight and constant communication.

If we fail to recognise that animals we have lived alongside forever have developed solutions and ways of living that are in some ways more intelligent than our own, what chance do we have with angels or advanced AI systems? If we can’t (any longer?) extend our intelligence using the animal friends we evolved with, what is the chance that we can use AI systems in this way? Where is the evidence?

In a strange conjunction, I met with the wonderful Ragnar Behncke, who describes himself as a Chilean Viking. He knows de Waal and, with his sister Isabel Behncke, made the first film of troops of bonobos solving social problems by having indiscriminate sex with each other. Ragnar has developed a technology that allows “Google glass” to display data from facial analysis about the emotional state of a person you are looking at. His first trials of the effects of such a facility were with primary school kids, who thought the whole thing hilarious and were considerably enlivened by it. I give this example to show that enhanced intelligence and liveliness and humanity via AI are possible. But that is not where we are heading and not where we will get to.

Angels are more significant to us than we realise. Primary school kids are closer to angel intelligence than we are. Pondering on how we would recognise an angel when we met her is a worthy meditation.

[1] …while away on our Wild Routes project that gets people back in touch with nature; there is some irony there that we will explore!

[2] Why is it always twenty years? Maybe that gets us beyond the messiness of the present without becoming impossibly far off? Businesses seem to have five-year plans and three-year ROI calculations, though some of them do their planning every five years rather than with a rolling forecast…

[3] Ironically, of course, in order to make autonomous cars safe for their occupants we would have to sacrifice the ideal of safety for the non-occupants… a little thought will reveal why!

[4] There’s a scale of autonomous vehicle environments, from level 0 to level 5. We ain’t there yet. Jean-Louis Gassée on the fallacy: https://mondaynote.com/autonomous-cars-the-level-5-fallacy-247ae9614e14

[5] There will be plenty of second order effects from self-driving cars. The insurance industry might be transformed. Ben Evans notes several other consequences: https://www.ben-evans.com/benedictevans/2017/3/20/cars-and-second-order-consequences

[6] One of the architects in my course on systems thinking and storytelling for techies drew a series of causal loops that told this story: one of being smart and right, of being looked up to and unquestioned, of becoming increasingly out of touch and ill-informed, until an epic wrongness occurred. The relative magnitudes are important. His summary: the righter you are, the wronger you will be.

[7] One of Nassim Taleb’s criticisms of many sciences and pseudo-sciences is that they misunderstand real-world probability, often applying Gaussian ‘normal’ curves to individual situations in which unobserved low-probability events will dominate the outcome, often by stopping the sequence. His go-to example: a casino gambler can only follow a ‘strategy’ until bankrupt, and then stops playing; the averages don’t apply to individuals. Bruce Schneier also has a book, Liars and Outliers.
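A minimal sketch of that absorbing barrier (ours, with made-up stake and bet sizes): even a perfectly fair bet ruins a large fraction of individual gamblers, while the ensemble average serenely stays put.

```python
# Fair-coin gambler's ruin: the ensemble mean is preserved, but ruin
# is an absorbing barrier that the average quietly ignores.

import random

def play(bankroll: int = 10, rounds: int = 1000) -> int:
    """Bet 1 unit on a fair coin until broke or out of rounds."""
    for _ in range(rounds):
        if bankroll == 0:
            return 0  # ruin is permanent: no more bets
        bankroll += random.choice((-1, 1))
    return bankroll

results = [play() for _ in range(10_000)]
print(f"mean final bankroll: {sum(results) / len(results):.2f}")
print(f"fraction ruined:     {sum(r == 0 for r in results) / len(results):.0%}")

# The mean hovers near the starting stake of 10; the average applies to
# the ensemble, not to the many individuals who went bust along the way.
```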

[8] The more advanced person necessarily takes a more nuanced and subtle approach, even when seemingly acting with bluntness or ignorance. When you get blindsided by an unexpected behaviour, maybe your counterpart is coming up from below, or maybe they’re masterfully reorienting the situation in a Boydian OODA sort of way. A more sobering thought is to consider how it is that people come to develop a nuanced awareness; often they’re at the wrong end of a power gradient and really do need to notice the subtle shifts in, say, an abusive partner/parent’s mood and behaviour.

[9] Our obsession with ordering surfaces in our software; lists, outliners, project-planners, and so on. Even mind maps impose a hierarchy that may be premature and unhelpful.
