
An evolutionary paradigm for AGI?

Hi Peter, thanks for taking an interest and responding. Amusingly, just as you sent your response to my post, I was forming a reply to one of yours!

From the excellent posts I have read on your Medium page, I know that we are approaching AI from very different perspectives — which is exactly what AI needs! I think a recent post of mine clarifies my position to a large extent, and I would welcome your opinion on it.

You are right, of course, that there have been considerable advances in AI. I would not want to come across as belittling those achievements in their own field, but the over-extension of findings into extreme and general claims is open to heavy criticism from other fields. As an evolutionary biologist, I come to AI with a different set of criteria for what is important for intelligence. Consequently, my criteria are often not met by advances that are hailed as steps toward AGI and the like. I am familiar with your argument for prediction and learning, alongside similar arguments put forward by others, and I have no doubt that these principles will help to advance machine learning in the future. I also find it interesting that these arguments tend to be put forward by computer scientists rather than by biologists.

As I implied in my reply to your post, biologists are very cautious about calling a behaviour ‘intelligent’ — as the bizarre history of behaviourism and the slow emergence of cognitive science attest. Intelligence is semantic. It is not a result but a method; take the example of children sitting a maths test, where a cheater appears equally competent but does not understand the concepts. In agreement with John Searle (at least on this specific point), I can see no evidence that machines have developed the semantic understanding required to qualify for the attribute ‘intelligent’. Unlike Searle, however, I do not think this will remain the case.

I feel your caricature of ‘evolving artificial humans’ is somewhat unfair. My article was forcefully trying to distance the idea of ‘human intelligence’ from today’s and tomorrow’s AI. My polemical focus on ‘rabbit-like intelligence’ was an attempt to get away from the idea that any kind of AGI would be human-like in the foreseeable future. Instead, I would rather AI researchers learn from the strategies of real-world ‘natural general intelligences’ in the animal kingdom and beyond. But you are right that I see evolution, rather than intelligent design, as the correct paradigm. Echoing Dawkins (2004) and Dennett (1995, 2017), evolution can be considered an engineering paradigm — specifically one of reverse-engineering. In this particular sense, and I don’t know whether you would object to this, I see evolution as the route to intelligent design in AI; I do not see it as an either/or case, as your response suggests (if I have understood you correctly).

What I mean by this is very specific. I am not in favour of evolving hardware, but of evolving software. I was not imagining a literal recapitulation of the evolution of rabbit-like form from a simpler ancestor. Rather, I mean that ideas such as Neural Darwinism, or its forerunners and equivalents in evolutionary epistemology, are the kinds of ideas that I think will lead to AGI. Their current forms have their faults, but the central idea holds: if you want a machine to solve as general a problem as possible, natural selection is the tool for the job. Why? Because we know it is fit for the job: it produced intelligent life over geological time, and I would also argue that it produces our minds in each generation (because the genome has harnessed the ‘natural selection algorithm’ to wire up the neurons that encode the mind).
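To make ‘selection as the tool’ slightly more concrete, here is a minimal, purely illustrative Python sketch of blind variation and selection ‘wiring up’ a tiny neural network to compute XOR. Nothing in it comes from the discussion above: the network shape, fitness function, population size and mutation rate are all arbitrary assumptions, and real Neural Darwinism-style proposals are far richer than a toy survive-and-mutate loop like this.

```python
"""Toy sketch (assumptions only): selection acting on a population of
'genomes' (flat weight vectors) until a 2-2-1 network computes XOR."""
import math
import random

random.seed(0)

XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
N_WEIGHTS = 9  # 2 hidden units (2 weights + bias each) + output unit (2 weights + bias)


def forward(w, x):
    """Run the 2-2-1 network whose genome is the flat weight vector w."""
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return 1.0 / (1.0 + math.exp(-(w[6] * h0 + w[7] * h1 + w[8])))


def fitness(w):
    """Higher is better: negative squared error over the XOR truth table."""
    return -sum((forward(w, x) - y) ** 2 for x, y in XOR)


def mutate(w, sigma=0.1):
    """Blind variation: copy the genome with small Gaussian perturbations."""
    return [wi + random.gauss(0.0, sigma) for wi in w]


# Selection loop: keep the fitter half, refill with mutated copies of survivors.
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(50)]
for generation in range(500):
    population.sort(key=fitness, reverse=True)
    survivors = population[:25]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(25)]

best = max(population, key=fitness)
print([round(forward(best, x)) for x, _ in XOR])  # typically prints [0, 1, 1, 0]
```

The point of the toy is only that competent behaviour can emerge from variation and selective retention, without gradients and without any ‘understanding’ on the part of the search.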

As such, and as I mentioned in my reply to your excellent article, to a biologist the achievements in prediction and learning ring rather hollow. (This is not to say that they will not give rise to highly useful technologies, but I would not consider them to be ‘on the road to AGI’. In that sense, I do not see these techniques as contributing to the precise methods or theory that will be used to generate the first AGI, except as part of the graveyard of ‘almost there’ ideas. There is no shame in this, as the vast leaps that computer science has made from de novo developments attest. To make room for new progress, old ideas — no matter how successful — are culled.) The reason many of these developments seem hollow to a biologist is that much of nature is thoroughly unintelligent, as in the famous case of the sphecid wasp mentioned here. It is therefore apparent to a biologist that higher cognitive functions like prediction and learning are not necessary to produce competent machines. Further, although I do not like it, most AI researchers seem to equate intelligence with competence. Consequently, the goal of AI, to develop machines that behave in a generally competent/intelligent way in the real world, does not demand such cognitive functions. That is not to say they are not useful, but they are not biomimetic. I would strongly argue that there is a reason (such as efficiency) why evolution has followed specific methods to ‘design’ competent behaviour, and I think we would do well to learn from biology. Therefore, I do not see the problem as one of how far prediction and learning can be extended, because the sense in which computers predict and learn is so narrow compared to biology that they are fundamentally different approaches.

I apologise for the mammoth response, but I would be delighted to continue this conversation. I have stated my argument in a way that may come across as aggressive, which I hope it does not; I have simply tried to be as clear as possible to give you things to ‘dig your teeth into’. I am sure there is much to learn from interdisciplinary approaches to AI. All the best!