Let’s be honest about the achievements of AI

Artificial intelligence does not mimic natural intelligence, and it is not clear that there have been significant developments toward anything with rabbit-like intelligence, let alone human-like intelligence. If researchers want a frank and open discussion about the impact and regulation of AI, the first place to start is being honest about what has been achieved and what is likely to be achieved in the near future.


Artificial intelligence (AI) is a term that drops in and out of fashion every couple of decades. Each new wave of excitement over the latest leap forward leads to wild extrapolations, as the mathematical techniques expand and give rise to new applications. Throughout this process, AI has become a vacuous term that can mean anything from Bayesian statistics to the mechanics of walking. You might take this positively, as an expansion of what ‘intelligence’ means beyond a narrow preconception of chess-playing academics, but in reality it represents a narrowing of the focus in the meaning of ‘artificial’. We are currently going through a phase where almost any advance in computer technology is described as a leap forward for AI.

A major problem for practitioners is that any new term coined to replace AI in some specific sense is readily absorbed into the public understanding as a synonym of AI. Machine learning, which is currently in vogue amongst practitioners, is a prime example of a recent absorption from computer science. Consequently, those who seek to automate tasks using various aspects of computational technology undergo a chameleonic renaming, from AI to cognitive science to computational neurobiology to machine learning and so on, in a desperate attempt to avoid being misunderstood. But, in each generation, young scientists and engineers are inspired by the public understanding of AI before they can reach a technical understanding of the difference between the hype and the reality, and so the cycle perpetuates.

Going back to the start of the research that was explicitly labelled as AI, most researchers focused on computers that play games. But the kind of intelligence a computer achieves whilst playing a game is specific to that particular game: there is competence without comprehension. Early chess-playing computers could be confused by non-standard openings or improbable moves, because their ability to play was confined to a narrow set of expectations. If a living thing had competence that was so unbalanced, we would not describe its behaviour as ‘intelligent’, because there is no demonstration of a choice of ‘strategy’. Instead the computer is highly sphexish: Hofstadter’s term, in honour of the sphecid wasp, for an agent that can only perform a task in its entirety, without understanding how its goal could be adapted to different situations to yield multiple strategies (for example, ones that permit shortcuts to be taken).
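
To make ‘competence without comprehension’ concrete, here is a minimal, purely illustrative sketch (my own toy example, not any historical chess engine): a bot that replies flawlessly from a small memorised book, but has nothing to fall back on when the opponent steps outside it. The names `KNOWN_REPLIES` and `sphexish_bot` are hypothetical.

```python
# Illustrative sketch of 'competence without comprehension': the bot plays well
# from a memorised book of openings, but it has no model of the game itself,
# so an unexpected move leaves it with no strategy to adapt.

KNOWN_REPLIES = {
    "e4": "c5",      # memorised responses to standard openings
    "d4": "Nf6",
    "c4": "e5",
}

def sphexish_bot(opponent_move):
    """Return a strong reply if the move is in the book; otherwise give up,
    because the bot never 'understood' chess in the first place."""
    if opponent_move in KNOWN_REPLIES:
        return KNOWN_REPLIES[opponent_move]
    return "???"  # a non-standard opening exposes the lack of comprehension

print(sphexish_bot("e4"))   # looks competent: 'c5'
print(sphexish_bot("a3"))   # an improbable move breaks the illusion: '???'
```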

People have a very all-or-none mentality with respect to intelligence: either you are intelligent or you are stupid, and there is nothing in between. And yet intelligence has gradually evolved, increasing or decreasing in complexity as different environments select for it. A sphecid wasp’s intelligence is not ‘inferior’ to a human’s intelligence; rather, it is tailored to a different set of problems. Creativity is costly, and it doesn’t pay a sphecid wasp to see the world as a human being does. The wasp doesn’t see shortcuts because it does not pay it to see the shortcuts that appear obvious to us.

But flipping the metaphor on its head, if a sphecid wasp were to look at human behaviour, it would laugh at the effort we put into collecting shoes, sharing food and performing religious rites. To a sphecid wasp, these corollaries of social living are alien to its solitary existence. Pulling this back to reality, a sphecid wasp wouldn’t really laugh at human behaviour, because it doesn’t pay a sphecid wasp to have the creativity to understand what a ‘human’ is: the wasp sees an environment and responds to it; it doesn’t see humans or ‘human shoes’. But the point is really more general: if we are just trying to replicate our genes, ‘human intelligence’ is just as circuitous and ‘stupid’ as ‘sphecid intelligence’.

But, regardless of their respective absurdities, both humans and sphecid wasps are very different from AI. Artificial intelligence is designed by humans to solve a particular task, not to solve a human’s task. For example, in a computer game we want an AI to be fun to play against; we don’t want it to use superhuman abilities and be unbeatable. I would go so far as to say that, for the tasks we have already put AIs to use in, we don’t want a human level of intelligence. Daniel Dennett has argued that, in general, we want ‘tools’ not ‘colleagues’.

There are many who seek artificial general intelligence and disagree with Dennett’s position. Demis Hassabis maintains that an artificial general intelligence could provide a meta-solution to any problem we could imagine, which hits the nail precisely on the head: Hassabis implicitly agrees with Dennett that we want tools, not colleagues, but believes that human-type AI is possible without making the transition from tool to colleague. People like Hofstadter have argued strongly that this is not possible, because any move to human-type understanding almost certainly requires a human-type ‘self’ or consciousness.

If we look back to the natural design of intelligence, we can see Hofstadter’s belief played out in the evolution of complex intelligence. As animals have acquired greater intelligence, they have become more and more circuitous in their mission to replicate their genes. From corvids to chimpanzees, intelligent species’ genes are selected to maximise genetic replication, but in the process they have opened up new avenues of evolution in real-time learning and extra-genetic modes of inheritance. I am not trying to paint a trait as broad or abstract as intelligence as ‘maladaptive’, but it is true that the benefits of intelligence bring a host of costs that distract and mislead an agent away from the genetic competition that underpins its corporeal design.

Intelligence does not appear to be a straightforward process of ‘optimisation’. I think this is crucial to understanding the important difference between the goal of AI being tools or colleagues. I have no doubt that machine learning and neural networks will provide extraordinarily useful technologies that can enhance human capability, but they will always require a human user with sufficient expertise*. Software can no doubt be wrapped in a black box, but somewhere a human will have to understand the technology well enough to develop further capabilities. Developing autonomous machines, on the other hand, means giving an agent the means to pursue its own goals through its own methods. An important aspect of autonomy is sentience: the ability to have a ‘self’ rather than follow some set of ‘rules’ or ‘instincts’. There is no reason to suppose that a sentient machine would want to solve humanity’s problems, in the same way that many humans do not feel inclined to solve humanity’s problems. Whatever the maths underlying sentient mechanics, there is no expectation of optimisation at the level of the agent, in the same way that optimisation applies poorly to human behaviour.

* Rabbits have no idea how their minds work, though my rabbit (Kevin) took a novel approach to learning by trying to absorb Steven Pinker’s ‘How The Mind Works’, in spite of the fact that she only managed to partially digest ‘The Language Instinct’ (when I wasn’t looking!).

From this discussion, I think we can separate out some important distinctions. AI may well be a meaningless term, but in the future I would still prefer that it be reserved for software involved in a nonlinear feedback process, such as the bots that respond to their environment in many computer games. Machine learning (ML) is more in keeping with the statistical analysis of big data, which is a very different application. Both AI and ML can use techniques like neural network modelling, but there are enormous differences between an AI that has to find ways to respond to a dynamic environment and software that is purely analysing a static environment captured in a dataset (a minimal sketch of the contrast follows below). I think this split makes clear that artificial general intelligence fits into the framework of developing decision-making agents that solve problems, rather than statistics that advise a human’s decision-making. But I would strongly differentiate artificial general intelligence (AGI) from any kind of biomimetic intelligence. AGI is about solving problems, in keeping with Dennett’s and Hassabis’s preference, but biomimetic intelligence is about developing agents that solve their own problems.
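
Here is a minimal sketch, under my own toy assumptions, of the split described above: ‘ML’ as a single fit to a fixed dataset, versus ‘AI’ as an agent inside a feedback loop whose own actions change what it sees next. The environment, the agent and the helpers (`fit_threshold`, `game_bot`) are hypothetical illustrations, not anything from a real system.

```python
# Contrast: fitting a static dataset once (ML flavour) versus acting inside a
# nonlinear feedback loop where the environment reacts to the agent (AI flavour).

import random

# --- 'ML' flavour: one pass over a fixed dataset --------------------------------
def fit_threshold(dataset):
    """Pick the threshold that best separates two labelled classes in a static dataset."""
    candidates = sorted(x for x, _ in dataset)
    best_t, best_correct = None, -1
    for t in candidates:
        correct = sum((x >= t) == bool(label) for x, label in dataset)
        if correct > best_correct:
            best_t, best_correct = t, correct
    return best_t

static_data = [(random.gauss(0, 1), 0) for _ in range(50)] + \
              [(random.gauss(2, 1), 1) for _ in range(50)]
threshold = fit_threshold(static_data)   # the data never reacts to the model

# --- 'AI' flavour: an agent in a feedback loop -----------------------------------
def game_bot(n_steps=200):
    """A bot chasing a target that moves in response to the bot: the bot's input
    distribution depends on its own previous actions."""
    position, target = 0.0, 5.0
    for _ in range(n_steps):
        action = 1.0 if target > position else -1.0    # respond to current state
        position += action
        target += 0.5 * action + random.gauss(0, 0.3)  # environment reacts to the bot
    return position, target

print("learned threshold:", round(threshold, 2))
print("final bot state:", tuple(round(v, 2) for v in game_bot()))
```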

What would biomimetic intelligence look like? For a long time, biomimetic intelligence would be very sphexish, and it would no doubt take decades, if not centuries, to reach the ‘stupidity’ of an animal as complex as, say, a rabbit. There is a huge challenge in biomimetic intelligence because we don’t really understand the practicalities of why ‘intelligence’ is a good strategy for genes to maximise their replication (Richard Dawkins describes this as God’s utility function) when it appears to place such a heavy burden on an agent’s ability to survive and reproduce. Consequently, we can design a machine with digital genes that try to maximise their replication, but it is unclear whether we would evolve a machine that is sentient, let alone intelligent.
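
As a purely illustrative sketch of what ‘digital genes that try to maximise their replication’ could mean, here is a toy replicator loop of my own devising (the constants and functions `replication_success`, `mutate` and `generation` are assumptions for the example). It shows selection on replication and nothing more; nothing here is claimed to produce sentience or intelligence.

```python
# Toy digital genes: bit-string genomes copy themselves with mutation, and copies
# that replicate more successfully in a shifting environment leave more descendants.

import random

GENOME_LENGTH = 16
POPULATION = 100
MUTATION_RATE = 0.02

def replication_success(genome, environment):
    """Toy fitness: how closely the genome matches the current environment string."""
    return sum(g == e for g, e in zip(genome, environment))

def mutate(genome):
    """Copy the genome, flipping each bit with a small probability."""
    return [bit if random.random() > MUTATION_RATE else 1 - bit for bit in genome]

def generation(population, environment):
    """Genomes replicate in proportion to their success; offspring carry mutations."""
    weights = [replication_success(g, environment) + 1 for g in population]
    parents = random.choices(population, weights=weights, k=POPULATION)
    return [mutate(p) for p in parents]

environment = [random.randint(0, 1) for _ in range(GENOME_LENGTH)]
population = [[random.randint(0, 1) for _ in range(GENOME_LENGTH)] for _ in range(POPULATION)]

for gen in range(50):
    population = generation(population, environment)
    if gen % 10 == 0:
        # the environment drifts, so what counts as 'successful replication' drifts too
        environment[random.randrange(GENOME_LENGTH)] ^= 1

best = max(population, key=lambda g: replication_success(g, environment))
print("best match to current environment:", replication_success(best, environment), "/", GENOME_LENGTH)
```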

I think nothing could be more interesting than developing biomimetic intelligence, but does industry have a use for it? The near-term answer is straightforward: no. Sure, one day scientists may develop a sentient machine that can exhibit natural language processing and communicate with human beings, but there is going to be a wide trough between that extremely desirable end-result and the capabilities of inchoate or transitional intelligences. We have to lower our expectations and focus on reverse-engineering something simpler, like a rabbit. I say ‘simpler’, but for all rabbits’ apparent stupidity, machines are not even approaching their capacity to exist in the real world.

There was a really interesting article recently about the way in which the imagery of AI has changed over the years, and how it continues to mislead the general public by implying that human-type intelligence is just around the corner. Indeed, many studies aim for higher impact by appealing to this fanciful narrative. My last comment is simply to add my voice to the hordes of researchers who wish that studies would tone down their assumptions and speculation about how human intelligence is achieved, and instead educate the public as to what computers are really capable of. Consider a recent article publicising the work of Google DeepMind: ‘Computers are Starting to Reason like Humans’. Perhaps ‘Computers are Starting to Behave like Worms’ would be more correct! We have no idea how humans reason, but we know how humans portray, in their behaviour and communication, their experience of reasoning. Worms seem like a more accurate comparison, given that they make decisions about their environment but have very limited self-awareness, much like a computer. It is not as catchy, but if AI is to take itself seriously then it needs to shed the ‘boom and bust’ cycles of interest and disinterest that have fed what can only be described as a widespread distrust of the AIs people interact with, like Siri or Cortana. If researchers in AI want an open and frank discussion with the public about how to regulate the powers that are given to machines, they need to start by being honest about what has already been achieved.

Thanks for reading!