AI: Cart Before the Horse. It’s what the brain does, not how fast it does it.

Jim Burrows
Personified Systems

--

I’ve read a couple of stories in the last few days addressing Artificial General Intelligence (AGI, the current name for human-like AI) that start out by looking at the power or speed of the latest huge machines. I think this is way off the mark.

For instance, Pawel Sysiak starts off “The Road to Artificial General Intelligence: Building a Computer as Smart as Humans” with the statement:

If an AI system is going to be as intelligent as the human brain, one crucial thing has to happen — AI “needs to equal the brain’s raw computing capacity. One way to express this capacity is in the total calculations per second the brain could manage.”

He then throws around numbers like 10¹⁶ calculations per second (CPS), a term he stresses is key to understanding what follows. The number, of course, was made famous 15 years or so ago in Ray Kurzweil’s “The Law of Accelerating Returns Applied to the Growth of Computation”. Kurzweil derives it by multiplying the number of neurons in the human brain (10¹¹) by the average number of synapses per neuron (10³) and by 200 CPS per synapse. This basically assumes that what brains and computers do is equivalent.
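For concreteness, here is that back-of-the-envelope arithmetic, with all three inputs being Kurzweil’s order-of-magnitude estimates rather than measured quantities:

    # Kurzweil's estimate of the brain's "raw computing capacity".
    # All three inputs are order-of-magnitude guesses, not measurements.
    neurons = 1e11              # estimated neurons in a human brain
    synapses_per_neuron = 1e3   # estimated average synapses per neuron
    cps_per_synapse = 200       # assumed "calculations per second" per synapse

    total_cps = neurons * synapses_per_neuron * cps_per_synapse
    print(f"{total_cps:.0e} calculations per second")  # 2e+16, i.e. ~10^16 CPS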

Ever since UNIVAC was first announced, we have talked about computers as “electronic brains”. Whenever new technology is created, we are faced with finding terms, and especially metaphors, to explain it. Cars are “horseless carriages” and their engines are measured in “horsepower” not because internal combustion engines and horses work in the same way, but because each was a decent metaphor for how the technology was used at the time. Predicting how soon a car will be able to return itself to the barn when you are done with it, based on comparisons of horses and engines, is not particularly useful. It overstretches the metaphor.

While that is not an entirely fair comparison, we shouldn’t lose track of just how different brains and computers are. Computers are generally digital, serial, clocked devices designed by people, whereas brains are analog, parallel, continuous devices that we don’t fully understand. Some experts even hold that there are reasons to believe that brain neurophysiology has a major quantum component. That view is neither universally accepted nor rejected, which illustrates how profoundly lacking our understanding is.

Using terms like “calculations per second” to measure the behavior of an unclocked, continuous, parallel analog device with possible quantum connections is premature.

Another measure of the degree to which we don’t understand AGI, as opposed to narrow or specialized AI, can be seen if we look back 30 years. 1986 marked two interesting developments. First, it was the year in which Rumelhart, Hinton and Williams published their work on the back-propagation of errors as a method of machine learning (ML) in simulated neural networks. Second, it was the year in which the connectome of the hermaphrodite nematode C. elegans, consisting of 302 neurons and about 7,600 synapses, was described.

Today we have a better description of the C. elegans connectome (indeed, of every cell in its physiology) and a richer understanding of neural nets and ML techniques. Yet in 30 years, even with a complete physical description of C. elegans at the cellular level, we have not been able to create a full simulation of its behavior. There is a model that does a very credible job of imitating the motion of the nematode swimming in a straight line, but we are still well short of the abilities of the creature.

In current usage, Artificial General Intelligence differs from narrow or specialized AI in that it applies to a system that can replicate the full panoply of human cognitive functions. The nematode has about as limited a range of intelligence as it is possible for an animal to have. It can recognize and search for food, find a mate and reproduce, recognize and flee danger, and perform a number of other very elementary functions. It does all this with a 302-neuron nervous system and a 95-cell musculature. If we call this 1CE of intelligence, how many CEs of intelligence does the average human possess, and how much harder is it to build a human level of general intelligence than a 1CE system? We don’t know.

What we do know, thanks to Kurzweil, is that the “hardware” that implements human-level intelligence consists of 10¹¹ neurons with 10³ synapses each on average, or about 10¹⁴ total synapses. Since those numbers have only one significant digit of accuracy, let us call C. elegans’s 7,600 synapses 10⁴, and say that the human brain thus has roughly 10¹⁰ times as many connections.
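Worked out in the same one-significant-digit spirit (the rounding, not the biology, is the point of the sketch):

    import math

    # Compare the two "connectomes" by synapse count, rounding each to the
    # nearest power of ten as the text does.
    human_synapses = 1e11 * 1e3        # neurons x average synapses per neuron = 1e14
    c_elegans_synapses = 7600          # about 1e4 at one significant digit

    ratio = human_synapses / 10 ** round(math.log10(c_elegans_synapses))
    print(f"roughly 10^{round(math.log10(ratio))} times as many connections")  # 10^10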

Now, we may be on the verge of developing a nematode-scale AGI in the next couple of years, and when we do, we may be able to figure out how much harder it is to create a true human-scale AGI. But the most important obstacle is not increasing the power of our largest computers or neural nets by a couple of orders of magnitude. The big hump is creating that 1CE level of general intelligence. Only then can we begin to scale it up and see how it scales.

Thinking that the brain works just like a computer is not a new thing. Two centuries ago, the common view was that the world worked like a giant clockwork, a machine. This notion was reinforced by things like Jaquet-Droz’s three automata, especially the Writer. Unsurprisingly, our fiction was soon populated by automatons and mechanical men, such as the “Steam Man of the Prairies”, and by hoaxes such as the chess-playing automaton known as The Turk.

It is patently clear that, despite the fact that the Writer could be programmed to write any text that fit in 40 letters and symbols, it wasn’t taught to write those words, didn’t understand them, and was in no way the equivalent of a person or intelligent being. The Steam Man was an impossible fiction, for all that steampunk is still a thriving genre, and The Turk and other 19th-century mechanical men had to be hoaxes. Gears and steam engines are not the sort of things that understand, that have intelligence.

Similarly, digital computers, von Neumann and Turing machines, do not work the same way as brains, and calling them “electronic brains” is a misnomer. That doesn’t mean that they cannot be used to emulate the way that brains do work, or that machines based upon modern computer technology cannot be made to work more the way brains do. Still, in order to do that, we have to learn the details of how brains actually work, and explore those methods and their artificial equivalents.

Deep Learning (DL) techniques, especially unsupervised DL, and what Monica Anderson at Sensai calls “Model-Free Methods” (holistic methods that leave it to the AIs to create their own models) are at the very least a start on building systems that are closer to the cognitive methods of humans and other animals that exhibit intelligence.
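As a minimal illustration of that unsupervised idea (my sketch, not Anderson’s methods or any particular DL system), consider a toy autoencoder that is never told what its data means and must invent its own compressed representation:

    import numpy as np

    # A toy one-hidden-unit linear autoencoder: its only training signal is
    # reconstruction error, so whatever structure it finds, it found itself.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 2)) @ np.array([[2.0, 0.5], [0.5, 1.0]])  # correlated data

    W_enc = rng.normal(scale=0.1, size=(2, 1))   # input -> 1-D internal code
    W_dec = rng.normal(scale=0.1, size=(1, 2))   # internal code -> reconstruction

    lr = 0.01
    for _ in range(2000):
        code = X @ W_enc                  # the representation the network invents
        err = code @ W_dec - X            # reconstruction error: the only feedback
        W_dec -= lr * code.T @ err / len(X)
        W_enc -= lr * X.T @ (err @ W_dec.T) / len(X)

    print("reconstruction MSE:", np.mean((X @ W_enc @ W_dec - X) ** 2))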

The fact is that the easiest way to describe the behavior of AlphaGo playing the game of Go at competitive professional levels is to say that it starts by forming opinions as to which moves would most likely be made by an experienced player, based on its understanding of the patterns that emerged when it observed millions of games. It then “reads out the board” for each of the most likely moves, playing out the most likely responses several moves ahead and making a judgement as to the value of each board based upon its similarity to winning or losing positions in its experience, which comes not only from observing millions of games but from playing millions of games against itself.
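In code, the shape of that loop looks roughly like the sketch below. The two learned networks are replaced by random stand-ins, and the search is a bare best-first read-out rather than the Monte Carlo tree search AlphaGo actually used, so treat this as an illustration of the idea, not DeepMind’s method:

    import random

    def legal_moves(board):
        return [p for p, stone in enumerate(board) if stone == 0]

    def play(board, move):
        child = list(board)
        child[move] = 1               # ignores colors and captures; schematic only
        return child

    def policy(board):                # stand-in for the learned move-probability net
        return {m: random.random() for m in legal_moves(board)}

    def value(board):                 # stand-in for the learned position evaluator
        return random.random()        # 0 = losing, 1 = winning for the side to move

    def read_out(board, depth, width=3):
        """Expand the `width` likeliest moves `depth` plies deep, backing up values."""
        if depth == 0 or not legal_moves(board):
            return value(board)
        scores = policy(board)
        likeliest = sorted(scores, key=scores.get, reverse=True)[:width]
        # Each reply is evaluated from the opponent's point of view, hence 1 - v.
        return max(1.0 - read_out(play(board, m), depth - 1, width) for m in likeliest)

    board = [0] * 361                 # a 19x19 Go board has 361 points
    best = max(legal_moves(board), key=lambda m: 1.0 - read_out(play(board, m), 2))
    print("chosen move:", best)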

Are its “opinions”, “judgements”, “experience” and ability to match patterns truly the equivalent of their human counterparts? Probably not, yet. However, we are getting to the point where these are the best analogues we have, and such a description is both understandable, to Go players and AI researchers alike, and not obviously wrong. AlphaGo is, of course, not an AGI. It is a specialized system that attempts to artificially replicate the skill of a professional Go player. Both the fact that it won and the way that it played suggest that it is artificially skilled at one task.

It remains an unsolved problem to produce a true AGI by applying the techniques that allowed developers to create an artificially skilled Go player, one with artificial opinions and judgements based upon artificial learning from observation and from laying out hypotheticals.

If I were to try to do that, one possible approach I would consider would be to create an artificial scientist: a specialized, artificially skilled system whose task was not winning on a 361-point, three-state board, but setting the weights and thresholds of a 302-neuron, 7,600-synapse connectome, matching the artificial worm to observations of the behaviors of a very great many actual worms. Skilled human experts have replicated one special-case behavior. Based upon that, and the sort of deep learning and reinforcement learning that powered AlphaGo, perhaps augmented by genetic programming techniques, mightn’t we be able to create a skilled enough artificial connectome configurer to build our 1CE AGI? If so, then we will be able to talk meaningfully about scaling.
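One naive way to frame that configurer (my sketch, not any real project) is as an evolutionary search over the 7,600 synaptic weights, scored by how closely the simulated worm’s behavior matches recordings of real worms. Both the worm simulator and the behavior score below are toy stand-ins, loudly hypothetical, since the whole point is that no one has built the real versions yet:

    import numpy as np

    N_SYNAPSES = 7600
    rng = np.random.default_rng(0)

    def simulate_worm(weights):
        """Toy stand-in: a real version would run the 302-neuron dynamics."""
        return np.tanh(weights[:100])          # pretend 100 outputs describe behavior

    # Hypothetical target: a behavior trace measured from real worms.
    target_trace = np.tanh(rng.normal(size=100))

    def fitness(weights):
        trace = simulate_worm(weights)
        return -float(np.mean((trace - target_trace) ** 2))   # closer match = fitter

    def evolve(pop_size=64, generations=200, sigma=0.05):
        pop = rng.normal(size=(pop_size, N_SYNAPSES))
        for _ in range(generations):
            scores = np.array([fitness(w) for w in pop])
            elite = pop[np.argsort(scores)[-pop_size // 4:]]  # keep the top quarter
            parents = elite[rng.integers(len(elite), size=pop_size)]
            pop = parents + rng.normal(scale=sigma, size=parents.shape)  # mutate
        return pop[np.argmax([fitness(w) for w in pop])]

    best = evolve()
    print("final mismatch:", -fitness(best))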
