Uncovering How the Brain Performs Intelligence


November 26th, 2007

Ray Kurzweil is an inventor, entrepreneur, author, and futurist. Called “the restless genius” by the Wall Street Journal and “the ultimate thinking machine” by Forbes, he was inducted into the National Inventors Hall of Fame in 2002. He helped organize the Singularity Summit at Stanford University in 2006 and gave the keynote presentation there, exploring some of the central issues of his book The Singularity Is Near. At the 2007 Singularity Summit he attended virtually, giving a brief talk and then answering audience questions about how technologists are currently uncovering how the brain performs intelligence.

The following transcript of Ray Kurzweil’s Singularity Summit presentation has not been approved by the author.

Uncovering How the Brain Performs Intelligence

Thank you, Tyler. It’s a pleasure to be here with you, at least virtually. There are two issues that people get excited about, and some actually get upset about. One is radical life extension, and the other is, really, radical life expansion, which ultimately will come from AI and our merging with artificial intelligence. Perhaps ironically, there were major talks on both issues today, and I had the opportunity to present at Aubrey de Grey’s Strategies for Engineered Negligible Senescence conference this morning. Radical life extension means getting to escape velocity, where we are adding more than a year every year to our remaining life expectancy. That’s not a guarantee of immortality, but it is at least a tipping point.
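To make the escape-velocity idea concrete, here is a minimal toy sketch; the starting expectancy, the annual gain, and its growth rate are all my own assumptions for illustration, not figures from the talk. The tipping point is simply the year in which medical progress starts adding back more than the year that just elapsed.

```python
# Toy model of longevity "escape velocity" -- all numbers are assumed,
# purely for illustration. Each calendar year, remaining life expectancy R
# drops by 1, while medical progress adds g years back; g itself grows.
R = 40.0        # remaining life expectancy in years (assumption)
g = 0.2         # years of expectancy added per calendar year (assumption)
GROWTH = 1.15   # assumed annual acceleration of medical progress

for year in range(2008, 2061):
    R += g - 1.0   # one year passes, g years are added back
    g *= GROWTH
    if g >= 1.0:   # now adding more than a year per year: the tipping point
        print(f"Escape velocity in {year}: gaining {g:.2f} years per year")
        break
```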

I would just like to make a few comments. I have been following the conference, and I think I will respond mostly to Peter Norvig’s comments, which I found interesting. There are many different ways of looking at the future and analyzing trends, and I think it’s important to point out that it is information technology that has a predictable trajectory. It is not hard to find predictions that haven’t worked out well, particularly ones not based on information technology, such as predictions made by novelists. I’ve tried not to just look backwards: I made forward-looking predictions in a book I wrote ten years ago, and I think they have been holding up well. Norvig cites an analysis by Jonathan Huebner, which commented on one of the graphs in my book. I don’t know if it’s one of the most important ones, but it’s an analysis of innovation. And I think Norvig’s point that innovation is hard to define is well taken. I took 14 different lists so as not to rely on any one list: the Encyclopedia Britannica, the Museum of Natural History, Carl Sagan’s Cosmic Calendar, and others, covering both biological and technological innovation. And you see very clear acceleration.

Huebner took a different list, questionable in my view, transformed the data in some questionable ways, and came up with a different conclusion. I do think innovation is a hard thing to define; people may have different views of it. The really key argument for Strong AI, which ultimately will lead to the profound transformation we are calling the Singularity, has to do with the progression of information technology, which is really inexorable in both hardware and software, and I want to come back to that. But first I will comment on a couple of other trends that were cited, because they are not direct analyses of information technology. One was the economy, which has been growing exponentially. It looks like slow exponential growth, but that’s because we factor out the 50% annual deflation in information technology. Today, for $50 you can buy a cell phone that, among other things, includes a computer a thousand times more powerful than all the computation MIT had when I was a student there. Measuring things in dollars factors out the tremendous gains in what a dollar can buy. In fact, the World Bank has announced that poverty in Asia has been cut in half, in large part because of the tremendous power of information technology.
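As a back-of-the-envelope check on the deflation point, consider this minimal sketch; the budget and units are invented for illustration. Fifty percent annual price deflation means each dollar buys twice as much computation every year, so a flat $50 budget buys roughly a thousand times more capability after a decade while contributing the same flat amount to dollar-measured GDP.

```python
# A minimal sketch, assuming 50% annual price deflation in computation,
# i.e., price-performance doubles every year. Numbers are illustrative.
spend = 50.0        # fixed dollars spent each year (assumption)
per_dollar = 1.0    # arbitrary units of computation per dollar in year 0

for year in range(11):
    print(f"year {year:2d}: ${spend:.0f} buys {spend * per_dollar:>9,.0f} units")
    per_dollar *= 2  # 2x per dollar each year == 50% price deflation

# After 10 doublings, 2**10 = 1024: about a thousandfold gain on a flat budget.
```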

Life expectancy, which Norvig mentioned, is also not a direct measure of information technology; until just recently, it hasn’t really been a measure of information technology at all. Life expectancy has gained, but the gains have been linear. Life expectancy was 37 in 1800. It has been a hit-or-miss process, and it’s only in the last few years that we can actually describe biology as an information technology. We collected the genome four years ago, and we now have technologies to simulate biology. We can simulate, for example, protein folding; that just happened in the last year. And technologies to reprogram biology the way we program our computers are brand new: RNA interference, new forms of gene therapy, and so on. Now that biology is an information technology, which has not been true until just now, it will become a thousand times more powerful ten years from now, and it will really begin to accelerate far beyond the linear pathway we have seen in life expectancy.

But getting back to AI, let’s look at the hardware and the software. I think the exponential gains in hardware are hard to deny, and it’s not just Moore’s Law; there’s a very strong consensus now on the exponential growth of hardware. Even the amounts of computation that the most conservative analyses say we need in order to simulate all the different regions of the brain will in fact be achieved within a few years. A key objection, which is more complicated to respond to, is: okay, we’re making exponential gains in hardware, but software is stuck in the mud. Now, Norvig did not make this argument, and I would have been surprised if he had; he’s head of research at Google, and their research has certainly not been stuck in the mud. But other observers have made this argument, and I point out many ways in which software has gained very dramatically as well. For example, a recent analysis shows that if you took today’s algorithms and ran them on the computers of 30 years ago, they would outperform the algorithms of 30 years ago running on today’s computers.
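The logic of that cross-era comparison is simple arithmetic. The sketch below uses invented speedup factors, since the talk cites none, just to show what the claim implies: the algorithmic gain must exceed the hardware gain.

```python
# Hedged illustration with invented numbers (the talk gives no figures).
# Normalize the 30-year-old algorithm on 30-year-old hardware to 1.
hw_gain = 1e4    # assumed hardware speedup over 30 years
alg_gain = 1e5   # assumed algorithmic speedup over the same period

new_alg_on_old_hw = alg_gain  # today's algorithm, 30-year-old machine
old_alg_on_new_hw = hw_gain   # 30-year-old algorithm, today's machine

# The claimed result holds exactly when software outran hardware:
assert new_alg_on_old_hw > old_alg_on_new_hw
```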

So, in other words, software has made more progress than hardware for that type of algorithm, and in my book I cite many other algorithms for which that’s true. And it’s not just efficiency; it’s also true of quality, or intelligence. A very good example of that, which is actually quantitative, is chess. Take Deep Fritz: recently, with one percent of the computation of Deep Blue, it performed as well, because of improved software and pattern recognition. Another way to put it: Deep Fritz had the same amount of computation as Carnegie Mellon’s Deep Thought, yet it outperformed it by 400 rating points, not because of more brute force, but because the quality of its pattern recognition was better. Understanding how the brain performs intelligence is not hidden from us; we are making exponential gains there. We are doubling the spatial resolution of brain scanning. We’re doubling the amount of data we’re gathering about the brain. And we’re showing that you can turn this data into working models and simulations. And that’s gearing up exponentially.
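For context on what a 400-point gap means, the standard Elo expected-score formula (a textbook formula, not something from the talk) puts the stronger program’s expected score above 90%, roughly 10-to-1 odds.

```python
# Standard Elo expected-score formula: a 400-point rating gap corresponds
# to about 10:1 odds in favor of the stronger player.
def expected_score(rating_gap: float) -> float:
    """Expected score of the higher-rated player, given the rating gap."""
    return 1.0 / (1.0 + 10.0 ** (-rating_gap / 400.0))

print(f"{expected_score(400):.2%}")  # ~90.91%
```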

Twenty-five regions of the auditory and visual cortex have now been modeled and simulated, performing comparably to human perceptual capability on tests. There’s an impressive simulation of the cerebellum, which again performed comparably to human skill formation on tests. IBM has a project underway to simulate a slice of the cerebral cortex, and so on. These efforts are gearing up. It’s not my argument that we absolutely need these projects; I agree with those who say we could achieve human-level AI while completely ignoring the human brain. But I think this work will accelerate the process. For example, in my own work in speech recognition, we gained a lot from actually understanding how the human auditory cortex processes information.

We are continuing to make exponential gains in hardware and software. People ask, “Whatever happened to artificial intelligence?” Well, there are hundreds of examples of AI in everyday use that were research projects 15 years ago. And the consensus view is actually moving much closer to the position I’ve taken consistently, which is that we will achieve Strong AI, human-level AI, within a quarter of a century, by 2029. There was a conference at Stanford after The Age of Spiritual Machines came out, about a decade ago, and the consensus then was that it would take hundreds of years. Last year, on the fiftieth anniversary of the Dartmouth conference that gave artificial intelligence its name, we had a better way of assessing the consensus: instant polling devices. And indeed there was a bell curve of when people thought human-level AI would pass the Turing test, and the consensus view was half a century. So my view is still more optimistic than the consensus view, but they are actually pretty close together.
