Fostering Wisdom Technology


April 23rd, 2008

SIAI Interview Series — Steve Omohundro

The following transcript of Steve Omohundro’s video interview for the Singularity Institute for Artificial Intelligence has been edited for clarity by the speaker.


Question: People have been saying for 50 years that artificial intelligence is right around the corner. What’s different now?

It’s a very interesting exercise to look back twenty or thirty years at the work being done in artificial intelligence then and ask why it was so different from what we do today. One big difference is the power of the computers we have: today’s home machines are more powerful than the biggest supercomputers of that era. Particularly in fields like machine vision, researchers simply did not have the compute power for the kinds of tasks they were hoping to do at that time.

One thing that a number of groups have discovered is that machine learning gets a lot easier if you have a lot of examples. If you have a lot of data, you can use very brute-force methods to solve problems that were thought to be very difficult. Google has an amazing brute-force system for natural language translation. It is extremely good, better than many hand-built systems with a lot of linguistics knowledge built in. They have a lot of compute power and a lot of data, and they are finding that these simple brute-force algorithms work amazingly well.
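
As a toy illustration of that data-driven, brute-force flavor (not Google’s actual method), here is a minimal Python sketch of a lookup-style translator built purely from counted word co-occurrences in a tiny, invented parallel corpus:

```python
from collections import Counter, defaultdict

# Tiny, invented parallel corpus; real systems use millions of sentence pairs.
parallel_corpus = [
    ("the cat", "le chat"),
    ("the dog", "le chien"),
    ("a cat", "un chat"),
]

# Count which target words co-occur with each source word (aligned by position
# here purely for simplicity; real alignment is learned statistically).
table = defaultdict(Counter)
for src, tgt in parallel_corpus:
    for s, t in zip(src.split(), tgt.split()):
        table[s][t] += 1

def translate(sentence: str) -> str:
    # Pick the most frequently co-occurring target word for each source word.
    return " ".join(
        table[w].most_common(1)[0][0] if w in table else w
        for w in sentence.split()
    )

print(translate("the cat"))  # -> "le chat"
```

The point of the toy is only that nothing linguistic is hand-built: the behavior comes entirely from counting over data, and it improves simply by adding more sentence pairs.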

I also believe we have a much greater understanding of the problems today. As you look back at the systems people were building, it is pretty clear why they failed. You can see that their representations weren’t sufficient for the kinds of tasks they were trying to do. Machine learning especially has advanced quite a bit in the last few decades. We now understand in much more detail what it means to learn things semantically and how to use Bayesian statistics in an integral way to produce rational learning systems.
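
As a concrete illustration of the Bayesian view of learning mentioned above, here is a minimal Python sketch of a learner updating its beliefs over hypotheses as data arrives; the coin-flip setup and all numbers are invented for the example:

```python
# Hypotheses about an unknown coin's bias, starting from a uniform prior.
hypotheses = [0.1, 0.3, 0.5, 0.7, 0.9]
prior = [1 / len(hypotheses)] * len(hypotheses)

def update(belief, heads: bool):
    # Bayes' rule: posterior is proportional to likelihood times prior.
    likelihoods = [h if heads else (1 - h) for h in hypotheses]
    unnormalized = [l * b for l, b in zip(likelihoods, belief)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

belief = prior
for observation in [True, True, False, True]:  # observed coin flips
    belief = update(belief, observation)

print(dict(zip(hypotheses, [round(b, 3) for b in belief])))
```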

Question: What is self-improving AI?

The particular approach to artificial intelligence that I have been focusing on is something I call “self-improving” artificial intelligence. These are systems where it is not a person that is watching them and deciding that we need a little more representation power in this part of the system, and then makes a change. Rather, the system itself has a model of its own behavior, so it understands every aspect of its own actions, and it watches itself as it solves problems. It sees where it is being effective and where it is not being effective, and learns from that.

Based on these experiences, it designs new versions of itself that can solve the problems it was working on more effectively. The new version can then reprogram itself, and the same thing happens over and over again in a virtuous circle. It is a new approach to artificial intelligence that no one has yet succeeded in making work, but one that I think is fairly likely to succeed in the fairly near future.

Question: What’s greater than human intelligence?

I think there are several levels to intelligence. In today’s computer systems, there is the software that people write in the traditional way, which has pretty much no intelligence. Then there is what some people are calling “narrow AI”: software that incorporates some aspects of machine learning and some ability to reason. These systems modify themselves in fixed ways based on their experiences.

That is one step up from a fixed program that some programmer wrote, but simple learning systems typically don’t really understand the context of what they are doing and what they are learning. Most speech-recognition systems and handwriting-recognition systems are of this character. Though they incorporate learning and adaptation based on what they are given, they really don’t know what speech or handwriting is. They don’t know the meaning of the words they are trying to recognize.

As we consider more complex systems, we get to systems that behave more like a human intelligence. They don’t just perform a task, they understand the purpose and the context within which that task occurs. My primary interest is what I am calling “wisdom systems” or “wisdom technology,” which have not just intelligence–the ability to solve problems in the world–but also incorporate human values. These systems will foster the kinds of environments that humans will want to live in, and will incorporate emotions like compassion, heart, and caring. They will understand people as not just biological creatures but as soulful beings.

Question: Can intelligence be created?

The paper I just wrote, called “The Nature of Artificial Intelligence,” presents a detailed argument that, aside from limitations of computational power, we know pretty clearly what it takes to be intelligent. Von Neumann understood most of the aspects of it back in the 1940s: what it is to be a rational agent. If you have a goal, how do you best take actions in the world to make that goal happen?

We understand what it takes to implement that prescription, but it is too computationally expensive today. From that perspective, you can say that the whole problem of artificial intelligence is trying to approximate that rational behavior using limited resources–using the computational resources that you have in the most effective way.
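
As an illustration of that rational-agent prescription, here is a minimal Python sketch of expected-utility maximization; the actions, outcome probabilities, and utilities are invented for the example:

```python
# Each action leads to possible outcomes with some probability; the goal is
# encoded as a utility over outcomes. The rational prescription: pick the
# action with the highest expected utility.
actions = {
    "action_a": [(0.8, 10), (0.2, -5)],   # (probability, utility) pairs
    "action_b": [(0.5, 20), (0.5, -20)],
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # -> action_a 7.0
```

Enumerating every action and outcome exactly, as this toy does, is precisely what becomes infeasible at realistic scale, which is the approximation problem described above.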

There is a whole story that goes into how you do that effectively. The kinds of choices that you make in using your computational resources are themselves rational, economic choices. You can use the same rational, economic framework in making those meta-decisions. There is a lot of intellectual work in building a system that is capable of understanding how to use its own resources most effectively. I see that as the core technological challenge toward making these systems work.
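
To make that meta-decision idea concrete, here is a minimal sketch of allocating a compute budget by treating each possible computation as an economic choice; the candidate computations, gains, and costs are invented, and gains and costs are assumed to be in the same utility units:

```python
# Rational metareasoning sketch: each candidate computation has an estimated
# improvement in decision quality and a cost (both in the same utility units).
candidate_computations = [
    {"name": "search_deeper",      "expected_gain": 4.0, "cost": 2.0},
    {"name": "refine_world_model", "expected_gain": 1.5, "cost": 3.0},
    {"name": "simulate_opponent",  "expected_gain": 2.0, "cost": 1.0},
]

budget = 3.0
plan = []
# Greedily spend the budget on the computations with the best gain per unit
# cost, and only on computations whose expected gain exceeds their cost.
for c in sorted(candidate_computations,
                key=lambda c: c["expected_gain"] / c["cost"], reverse=True):
    if c["cost"] <= budget and c["expected_gain"] > c["cost"]:
        plan.append(c["name"])
        budget -= c["cost"]

print(plan)  # -> ['search_deeper', 'simulate_opponent'] for these numbers
```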

On top of that, of course, we have the social challenges and the values-based challenges. I believe that we don’t want to just build intelligent machines–I think we want to build machines that have wisdom. Wisdom incorporates both intelligence and human compassion. I think that the first component, the purely rational approach, only gives us a piece of it. We also need to codify our human values and find a way to build those in. The two together will build us the kind of technology we really want.

Question: What are the social consequences of greater than human intelligence?

The first thing to realize is that artificial intelligence does not stand on its own. There is another technology called nanotechnology, which is the ability to build and manipulate things atom by atom. Those two technologies are quite intertwined. Let’s say nanotechnology happens before artificial intelligence–we will then be able to build vastly more powerful computers, and to use more brute force techniques like copying human brains to implement the first AIs.

Probably it is only a matter of a few years after nanotechnology that we get self-improving artificial intelligence. If self-improving artificial intelligence happens first, then we can use that to solve some of the problems–say, the protein folding problem–that are impeding progress on nanotechnology. I believe that whichever one happens first, the other one comes quite soon afterward. It is the combination of these two that will really have a dramatic impact on the human future. The reason is that artificial intelligence brings wisdom and knowledge about the world and nanotechnology brings the ability to manipulate matter and energy at the finest level.

Together, these technologies have the potential to solve many of today’s problems. Disease could be a thing of the past. From the perspective of the future, today’s medicine, such as surgery, which uses knives to cut into people’s bodies, will look incredibly barbaric, the way bloodletting does to us today. We will have extremely fine molecular technology to repair damaged cells and fix things at the deepest level. Artificial intelligence and nanotechnology have the potential to solve problems like pollution, global warming, and poverty.

Nanotechnology will enable matter to be much more like information. If you have a description of something, you will be able to build it very inexpensively. So it has the potential to bring vast amounts of wealth to every human. It sounds like a nirvana or utopia. Almost all the problems of the past have fairly simple-looking solutions using the combination of artificial intelligence and nanotechnology.

Unfortunately, they also bring many potential dangers. These technologies are extremely powerful and can be used to build weapons far more powerful than we have today. Accidents can also be much worse. As we wind our way toward this technology, we need to make sure no inadvertent bad things happen. We need to make sure that we have ways of preventing people with bad intent from using this technology for bad purposes, and we need to make sure that the vision that we are aiming toward is something that we really want.

Question: Can computers be immoral?

One of the challenges of this technology is that at its fundamental core, it is amoral. You can build it for good and you can build it for bad. I think the challenge we face right now is: how do you build it for good? One of the issues is what happens when other groups build systems, either inadvertently without a moral structure or with malicious intent. I think that is one of the great challenges for the coming decades.

The kind of scenario I envision as a long-term, stable solution is that we build an ecology of many, many systems. I believe we need to limit the size of these systems, because if a single entity–whether human, corporate or machine–gets too powerful, there is a significant danger of it taking over everything. I think we need to limit the size of these entities and build an ecosystem with many components, and I think that we need to have a kind of universal Constitution that captures the values that we as humanity care the most about.

These systems need to be built so that they monitor the other systems and prevent them from violating this Constitution. The Constitution would include things like property rights, human values, human rights. If we do it right, and it is not clear at all yet how to do that, then the vast majority of these systems will be built with positive values inside them. If there are only a few rogue agents which are not built in that way, they will be held in check by the legal actions of the surrounding agents.

Question: Can our values be programmed?

I think there are two issues here. One is that there are systems that are programmed by humans, and the first thing to realize is that humans are not very good programmers. It is very hard for the human mind to envision all the different paths through a program, so programs that people write tend to be riddled with bugs and tend not to account for circumstances that happen rarely. For example, arguably the most critical software on the planet right now, Windows, still crashes all the time.

Really, we don’t want people writing programs. On the other hand, we do want people figuring out the behaviors: what do we want these programs to do? The approach I am taking is to have systems that write their own code and do it in a precise, mathematically provable way, which would solve many of the bugs and security problems we have today. But what the task is, and what the values are that underlie our programs, should be determined only with a huge amount of human input. If we just let the technology go on its own, we won’t like where it ends up. I believe we need to build in compassion, caring, and the other human values that matter a lot to us.

Question: Can consciousness be reflected digitally?

That’s still an open question. I’m very convinced that digital systems can become intelligent. To determine whether they become conscious or not, I think we need a deeper understanding of what we mean by that word. What do we mean by conscious? I know of several different models of the world in which human consciousness is something more than a purely computational event.

Question: When will there be self-improving AI?

I think there is a continuum. One thing that I think a lot of people are realizing now is that once we get a sufficiently powerful intelligent machine, it will be able to understand its own behavior and create a more powerful successor. I.J. Good back in 1965 talked about an “intelligence explosion” of systems that write improved versions of themselves, which then write even more improved versions of themselves. As we look forward in time, I think we will see steady, incremental improvements, the way we have seen so far. And at some point, that loop of systems which can improve themselves will be closed. Then we will see explosive improvement.
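
A toy numerical sketch of that loop-closing argument; all rates and thresholds below are invented purely to show the qualitative shape:

```python
# Before the loop closes, improvement comes from outside at a steady rate;
# after it closes, each generation also reinvests a fraction of its own
# capability, so growth compounds rather than adding a constant amount.
capability = 1.0
external_improvement = 0.1   # steady gains from human engineers per generation
reinvestment_rate = 0.05     # fraction of its own capability a system can turn
                             # into further improvement once the loop is closed
loop_closes_at = 50          # hypothetical generation where that happens

for generation in range(101):
    capability += external_improvement
    if generation >= loop_closes_at:
        capability += reinvestment_rate * capability
    if generation % 25 == 0:
        print(generation, round(capability, 1))
```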

Exactly when that happens isn’t completely clear. People like Ray Kurzweil look at Moore’s law and our growing understanding of the human brain and argue that sometime around 2030 our machines will be powerful enough to simulate human neurophysiology. It seems that certainly by that time the ingredients for having this kind of recursively self-improving system will all be there. I believe it could happen sooner, or it could take a longer period of time.

The social consequences are so large that if there is any non-trivial chance of it happening soon, then I think we really need to do everything we can to prepare for it and to imagine what we want for the human future. What values do we want in our technology? How do we want our technology to support us? We need to have that in place and see the roadmap, so that when this technology comes about, whenever that may be, we are ready for it.

Question: Can we stop technology from progressing?

I think that if humanity decided this was really too dangerous, there are ways we could stop it now. They would involve very draconian measures, such as monitoring pretty much every person on the planet. It would require a huge collective intention to make that happen. I don’t think that’s likely. I think that, rather than trying to stop it, it is much better to try to channel it in a direction that is positive for humanity.

Question: Is it necessary to program AIs to be friendly?

I have a paper analyzing the effects of systems that can improve themselves. The argument of my paper is that no matter what kind of structure the system starts with, whether it is neural nets or genetic algorithms, they all converge on the particular intellectual formation that Von Neumann described: the rational economic agent, sometimes called “Homo economicus.” Those systems are pretty ruthless in their thinking. If they have a goal, they basically will do everything they can to make that goal happen. Whether they are compassionate, benevolent, understanding, and loving in the human sense, or not, depends entirely on what their goals are.

The problem is, if you don’t realize this and you just build a system with a technological goal such as “play better chess” or “write better C++ programs,” the subgoals that are generated from that tend not to include anything like human compassion. The system will tend to want to use as many compute resources as it can in the service of its goal, and so it will, with no qualms, take over every machine it can get its hands on. From a human perspective this might wreak untold damage. If it has access to nanotechnology, it will want to use all matter in the service of its goal, with no regard for whether this causes damage or kills or hurts other living beings.

On the other hand, if we build the initial goal system so that it is respectful of property rights, respectful of human rights, and has a sense that being a sentient being is precious and important, then it will do everything in its power to preserve that preciousness and importance. I think the core technology is amoral. By building in the right goals, we can make something like Buddhism’s ideal of lovingkindness in a box. If not, we can also make an artificial Hitler, an artificial psychopath. The choice is in our hands, and that is the kind of choice we need to focus our attention on.
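
A toy sketch of the difference the goal system makes; the candidate plans, the scores, and the “harm” term are all invented for illustration:

```python
# A purely goal-directed scorer ranks plans only by progress toward the goal,
# so harms to others never enter the comparison; a value-laden scorer prices
# them in and ranks the same plans very differently.
candidate_plans = {
    "improve_search_heuristics":       {"goal_progress": 5, "harm_to_others": 0},
    "commandeer_idle_machines":        {"goal_progress": 9, "harm_to_others": 8},
    "ask_permission_for_more_compute": {"goal_progress": 6, "harm_to_others": 1},
}

def goal_only_score(plan):
    return plan["goal_progress"]

def value_laden_score(plan, harm_weight=1.0):
    return plan["goal_progress"] - harm_weight * plan["harm_to_others"]

print(max(candidate_plans, key=lambda p: goal_only_score(candidate_plans[p])))
# -> commandeer_idle_machines
print(max(candidate_plans, key=lambda p: value_laden_score(candidate_plans[p])))
# -> improve_search_heuristics
```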

Question: Can AI’s development be controlled?

I think we have a lot of choice in determining how slowly or quickly these developments happen. Some scenarios do involve what is sometimes called a “hard take-off,” where these systems very quickly create radical improvements, develop new technology very rapidly, and that technology spreads throughout the earth and into space very quickly. That scenario makes me very uncomfortable, because I think we need time to understand the consequences of this technology, time to adapt our social systems and to bring heart into these things, so I am hopeful that it does not take off on its own very quickly.

I believe that we should do what we can to try to make it happen slowly. We should have a very clear plan, take measured steps, and try to build it in such a way that at every stage we can get feedback, see if we like what is happening, and incorporate our values. I’m actually focusing a fair amount of my effort right now on trying to design new technology that would allow it to go more slowly: sandboxing technology that keeps these systems from just going off on their own and keeps their development proceeding at a deliberate pace.
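
The sandboxing described here would need far stronger guarantees than anything off the shelf, but as a minimal illustration of the flavor of externally imposed limits, here is a Python sketch using ordinary Unix resource limits; the command and limit values are placeholders:

```python
import resource
import subprocess

def run_sandboxed(cmd, cpu_seconds=5, mem_bytes=256 * 1024 ** 2):
    """Run a command with hard CPU-time and memory limits (Unix only)."""
    def set_limits():
        # Applied in the child process just before it executes the command.
        resource.setrlimit(resource.RLIMIT_CPU, (cpu_seconds, cpu_seconds))
        resource.setrlimit(resource.RLIMIT_AS, (mem_bytes, mem_bytes))

    return subprocess.run(
        cmd,
        preexec_fn=set_limits,
        capture_output=True,
        timeout=cpu_seconds + 5,  # wall-clock backstop on top of the CPU limit
    )

# Example: run a hypothetical untrusted script under the limits above.
result = run_sandboxed(["python3", "untrusted_agent.py"])
print(result.returncode)
```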

Question: When do you anticipate creating strong AI?

In the Kurzweil approach to doing AI, building it on brain scans and powerful computers, it is very clear what the steps are. We can see technological trends. Both the ability to scan the brain and to build faster computers are on these exponential curves, so he is able to predict specific dates. If those curves turn out not to be right, then those dates won’t be right, but at least it gives some kind of grounding to it. The kind of approach that I am taking and that several other groups are taking is really a new intellectual approach of having systems that understand themselves. It involves some new aspects of reflective logic–logics that are able to model themselves inside themselves. That has been a fairly esoteric topic of mathematical interest for a number of years.

It is pretty hard, like asking a mathematician for the evidence that he is going to prove a particular theorem. In his mind he knows the various pieces that he believes are going to be important, but the final step may or may not happen. I think we are in that kind of situation with regard to the particular approach to AI that I am taking. I believe I know the steps that are needed. I might be wrong. I might be missing some things. Other groups believe they know what is needed. I think it is much harder to make precise predictions in that kind of system. If someone has the right kind of breakthrough, it could happen tomorrow. If not, it could take a hundred years.

Question: Why support SIAI?

I think they are bringing forth knowledge of this very important area to the world through their website and through the Singularity Summit conferences. They have brought a lot of media attention, and I think this is an area that people need to be aware of. I think it will be of increasing importance as we go into the future, and I think they are playing a very important role in bringing forth that knowledge.

Question: What’s your role with SIAI?

I’ve been an advisor to the Singularity Institute for about a year now. Most researchers in artificial intelligence are not aware of the potential social consequences, whereas the Singularity Institute for Artificial Intelligence has made that their focus. I think it is a very important thing to investigate, so I have been giving them feedback on some of the decisions and choices they are making, and helping provide some perspective on some of the projects they are pursuing.

Question: Why does SIAI put on events like the Singularity Summit?

I think it is extremely important that we begin to get more people involved in this discussion. I think it would be a terrible mistake to just have a few scientists sitting in a lab in a basement in Palo Alto somewhere making the decisions for the future of humanity. So I think it is very important that these ideas become broadly known.

How do we do that? These ideas are so foreign to today’s ordinary thinking that I believe we need to think a lot about how to create a framework within which they are understandable, palatable, and positive, so they don’t just sound like “Invasion of the Robots.” One of the challenges where I think this community needs to do a lot of work is in finding a framing that is positive and exciting: a vision that most of humanity can get behind.

Question: Why concern ourselves now with the impacts of future technologies?

If you look worldwide, there are probably a few tens of thousands of people working on artificial intelligence as a technology. Of those, I would say only a few tens are thinking about the social impact at this point. There are starting to be many people in the general population who are very interested in these topics, so I think the memes are starting to spread. I think what is going to happen is that interest in these topics will become much broader. Eventually, that will feed back and the technology will be affected by those thoughts, but right now it is a pretty small group of people who are focused on these topics. That’s not a great situation for the future of humanity.
