Will Superintelligence come with Superwisdom?

December 14th, 2007

We know that highly intelligent people can make terrible decisions. The question therefore arises: will our emotional, social, psychological, and ethical intelligence and self-awareness keep up with our cognitive abilities? Max More offered his thoughts by outlining the goals of the proactionary principle at the 2006 Singularity Summit at Stanford.

The following transcript of Max More’s Singularity Summit at Stanford presentation, entitled “Cognitive and Emotional Singularities: Will Superintelligence come with Superwisdom?”, has not been approved by the author.

Will Superintelligence come with Superwisdom?

Because of limited time, I’m going to start speaking at a normal rate and then I will increase superexponentially until I am going at a very high rate. Fortunately, Ray has a device he’s hidden in the audience, which will increase your perceptual speed at the same time.

The main focus of my talk is that achieving superintelligence, in the form of cognitive intelligence, does not necessarily mean that we will achieve wisdom, or superwisdom. I think everybody here is familiar with the phenomenon where somebody highly intelligent, perhaps even yourself, has made some stupid choices: shortsighted choices, made while overcome by certain desires, lusts, or hungers; just not listening to your intelligence, not letting it take you in the right direction.

I think that is a real problem, because we should not assume that machine intelligence will necessarily be exempt from it. It will have different origins, maybe different kinds of problems. Apart from the question of when machines will achieve human intelligence, meaning “intelligence” in the standard cognitive sense, at least as important a question is when they will exceed or at least match human wisdom.

To avoid this problem of lots of power without responsibility, as Lord Acton used to warn us about, what we need to do is explicitly pursue what we might call “superwisdom” as well as superintelligence. Because superwisdom sounds a little bit like something from a comic book, the other term that I use is integral intelligence. I use that term to mean the collection of important faculties that includes mindfulness; again, you can be highly intelligent and not very mindful. It also means creativity, because you can have very sophisticated analysis without necessarily being creative. If you want to see all the options and make a wise choice, I think you also have to generate lots of alternatives.

Objectivity is something we can never completely reach, but we can approximate it using principles. Of course, there are cognitive capabilities: your ability to use your intelligence to solve problems. There is also what I am calling extropic conation, by which I mean good intention, if you like, to put it very simply. And then there are various kinds of social, environmental, and institutional support. Even if you are very smart, very wise yourself, it may be difficult to make the wise decision if everyone around you is pushing you in one direction, feeding you certain kinds of information, and rewarding you for certain kinds of behavior. This is certainly an issue within large corporations.

There are a whole bunch of foundations of foolishness, problems that I think potentially affect any kind of thinking being, including machine intelligence. The first one is bounded awareness. This is partly the mindfulness issue and partly a set of cognitive biases that we seem to be prone to. Despite the information being out there to make a wise decision, we may not actually see it. We may not want to see it.

I think an interesting example of that was a movie I’m sure a lot of you have seen called “Memento,” which I think is quite fascinating from a variety of viewpoints. The story is told going forward, but jumping backward at each step, about a man who has no short-term memory and writes himself clues and tattoos his body to remind himself what’s going on. You realize as you go through the movie that what he has done is at some point made a decision to fool his future self by faking information to give his life a sense of meaning. In other words, he is overriding his intelligence, what he knows to be true, because otherwise he feels his life is going to be futile.

We tend to ignore possible outcomes or consequences because we want to simplify the world. Things are very complex, especially with difficult issues like the ones we are discussing, so we oversimplify in order to handle them. This leads to certain biases involving low-probability events, which Nick Bostrom mentioned. We are actually very poor at reasoning about events that are very improbable but which may have very large consequences; we tend to ignore those. You can be very intelligent in the sense of reaching a goal, given a very narrow definition of what the problem is and who should benefit from it, but that may not be satisfactory to other people.
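As a rough illustration of why ignoring improbable, high-consequence events can be a mistake, here is a minimal expected-value sketch in Python. The probabilities and loss figures are invented purely for illustration and are not from the talk:

```python
# Illustrative numbers only: a rare event can dominate the expected outcome.
p_rare = 1e-6          # a one-in-a-million event
loss_rare = 5e9        # catastrophic loss if it happens
loss_certain = 1_000   # a sure, mundane loss people readily attend to

expected_rare = p_rare * loss_rare
print(f"expected loss from the rare event: {expected_rare:,.0f}")  # 5,000
print(f"certain mundane loss:              {loss_certain:,.0f}")   # 1,000
# Intuition rounds p_rare down to zero, so the rare event gets ignored
# even though its expected loss here is five times the certain one.
```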

I think a major one that could be relevant here, and we will hope that machine intelligences are designed to avoid this problem, is not learning from experience, or hindsight bias. We are very good at using hindsight bias to prove how smart we are, that our decisions came out correctly, because we conveniently misremember what we actually said in the past. It’s very, very common. What makes it a big problem in organizations is that nobody wants to keep objective records of what they decided, other than perhaps for things like board meetings, and even those might be written up afterwards.

A number of people from this audience and on the stage like to make predictions. I think it’s very good, as Ray has done, to write them down publicly, and then you can follow up on them later. I know a lot of people who say, “I predicted that twenty years ago!” But if you ask them where you can see that prediction, you rarely can find it. That applies to everyday kinds of decisions too.

Another set of problems involves the theories we hold about other people. We tend to suffer from these kinds of limitations in our perception. And then the theories we hold about ourselves bias our perception of things. No matter how good our cognitive processing ability, it can lead us off in the wrong direction. The illusion of favorability is believing you are better than other people in certain respects. There are numerous studies, which many of you will know about, showing for instance that 90% of people think they are better-than-average drivers. For just about any skill you pick, people will think that they are better than average.

The illusion of control is a very common one: we tend to overestimate how much control we have over the future. Going along with that, we tend to over-explain things in terms of particular people causing certain things to happen, rather than their perhaps being unintended outcomes, side effects, unforeseen consequences, or simply random chance. If, for instance, we find a cluster of cases of cancer in a certain area, most people will immediately say it must be because of the nuclear power station, power lines, or whatever it is that’s nearby. In fact, statistically, there are going to be certain clusters that arise purely by chance.
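To see how easily chance alone produces apparent clusters, here is a minimal simulation sketch in Python. The grid size, case count, and cluster threshold are arbitrary choices made for illustration, not figures from the talk:

```python
import random

# Scatter "cases" uniformly at random over a grid of regions, then
# count how often at least one region looks like a suspicious cluster.
rng = random.Random(42)
REGIONS = 100        # e.g., a 10x10 map of neighborhoods
CASES = 200          # so 2 expected cases per region
THRESHOLD = 8        # call a region a "cluster" at 8 or more cases
TRIALS = 10_000

hits = 0
for _ in range(TRIALS):
    counts = [0] * REGIONS
    for _ in range(CASES):
        counts[rng.randrange(REGIONS)] += 1
    if max(counts) >= THRESHOLD:
        hits += 1

print(f"maps with an apparent cluster: {hits / TRIALS:.1%}")
```

With these numbers, roughly one map in ten contains a region with four times the expected case count, even though every case was placed completely at random.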

A big one, I think, is a lack of access to the emotional centers of the brain. Our reason and emotion are intertwined in a certain way, primarily in one direction: when we have an emotional response, it connects to the cognitive centers of the brain and sets off certain kinds of thoughts. For instance, “I see a lion” or “I see a mugger” sets off fight-or-flight reactions, and you very quickly take action.

However, we are very poor at going back the other way. We just don’t really have the neural pathways yet to go down into our emotions and modify them; to say that a feeling of anxiety I’m having is irrational, or that I shouldn’t be terrified of flying in this plane because the chances of an accident are very low. We can’t just switch off emotions like that. We have huge numbers of emotional problems in the population, and it takes a lot of therapy or drugs to try to work on them. There are rational-emotive therapies and so on, but they don’t work automatically or instantly. We have a chance of doing something about that.

What are the consequences of unwise superintelligent choices? For a being who is superintelligent but maybe not so wise, achieving certain goals is going to be more costly. You can see this kind of very narrow maximization in a lot of AI systems. Not Eli’s, I’m sure, because his will be a lot more considerate than that, but a lot of AI systems that are built for a specific purpose, rather than as an intellectual challenge to create general intelligence, may be designed to do that purpose very well. They may be very good at narrow maximizing, but they may not have very inclusive goals. They may just focus very narrowly on that goal and get the job done. That might be great if you wanted an assassin to do the job at all costs, but for most decisions it would not be satisfactory.
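To make narrow maximization concrete, here is a hypothetical toy sketch in Python, entirely my own rather than anything from the talk: an optimizer that scores plans only on the stated goal picks the most destructive option, while one whose objective also weighs side effects does not. The plan names and numbers are invented:

```python
# Toy contrast between a narrow and an inclusive objective.
# Each candidate plan has a score on the stated goal and a separate
# side-effect cost that the narrow objective simply never sees.
plans = {
    "careful":    {"goal": 6.0,  "side_effects": 1.0},
    "aggressive": {"goal": 9.0,  "side_effects": 8.0},
    "reckless":   {"goal": 10.0, "side_effects": 50.0},
}

def narrow_score(plan):
    """Narrow maximizer: only the stated goal counts."""
    return plan["goal"]

def inclusive_score(plan, weight=1.0):
    """Inclusive maximizer: side effects enter the objective too."""
    return plan["goal"] - weight * plan["side_effects"]

best_narrow = max(plans, key=lambda name: narrow_score(plans[name]))
best_inclusive = max(plans, key=lambda name: inclusive_score(plans[name]))

print("narrow objective picks:   ", best_narrow)     # reckless
print("inclusive objective picks:", best_inclusive)  # careful
```

The point is not the arithmetic but the structure: whatever the objective omits, the maximizer treats as free.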

There is a big question of what kinds of paths an AI might take if not guided in the ways I am going to suggest. One promising feature of these intelligences is that they are not the outcome, at least directly, of biological evolution and natural selection. Indirectly they are, because they are shaped by people and organizations who were themselves shaped by evolution. To the extent that is true, they won’t escape all these problems. Humans have a lot of specific problems due to our genetic imperatives. These tend, again, to make us too focused on certain goals (sex, food, shelter, and so on) and not, perhaps, to think more long-term or more inclusively about our goals. I think that could lead to some of the problems I will mention at the end, in one of the possible scenarios, where we use superintelligence to pursue certain narrow goals of ours which may not necessarily be wise in the context of our larger sense of what is important to us.

At least that is one promising thing. They don’t directly have this programming. You have the opportunity at least to create a new foundation, perhaps a healthier, wiser foundation, but it’s not automatic.

There is a range of solutions. Integral intelligence, again, is the idea that when you are designing an AI, or commenting on a project that is already out there, you should encourage consideration of all these other aspects of thinking and deciding, not just the cognitive element, as crucial as that is. Friendly AI, which is Eliezer’s focus, is the idea of building an AI’s infrastructure, its basic foundation, in such a way that it really cannot help but turn out to be friendly. It is a kind of deeper version of Asimov’s laws of robotics, I suppose. Those were very basic laws that were supposed to prevent certain kinds of harm. I think Friendly AI will be a lot more intelligent than that and go a lot deeper. Essentially, it would make being harmful simply not in the AI’s nature, and that would be very nice. So that is a big design challenge.

My current focus is what I call the proactionary principle. It is a kind of response to the precautionary principle. The proactionary principle, I think, should actually be part of Friendly AI programming, in the sense that it is a set of principles for thinking wisely, both for individuals and for organizations. It tries to strongly encourage people to use these broader ways of thinking about problems, especially organizations dealing with major technological questions: regulatory organizations, government bodies, international treaty bodies, those kinds of things.

The proactionary principle grew out of an online summit we held a couple of years ago. Some of you may have heard of the precautionary principle, although it is better known in Europe; it has an effect here also. It essentially says you cannot introduce any new technology, or even a production process, that affects the environment or human wellbeing unless you can prove beyond a reasonable doubt that it will not produce any long-lasting, pervasive harm.

Of course, that is extremely hard to prove. The proactionary principle responds to that, and to all the concerns we are discussing here about possible problems, all the way up to the level of extinction events, by drawing on the best-tested procedures to structure the decision-making process. In other words, there is a lot of research on decision-making, and there are known methods for producing better outcomes. Some people are very good at some of those methods and don’t know much about the others.

What I am trying to do with the proactionary principle is to bring all of this knowledge together and get people to contribute to it from their various disciplines. That knowledge is built into the principles, which themselves have more specific applications; I have just created a field guide to actually implementing them. The idea is to embody what I call “the wisdom of structure”: taking the best available knowledge, not the most popular but the most scientifically well-tested knowledge, about how to decide, think, come up with options, and reach the optimal decision, and building that into a set of principles and a procedure you could apply in lots of different circumstances to hopefully reach a wiser decision.

You can think of the proactionary principle as a prop for wise thinking. It is essentially a combination of critical thinking, creative thinking, and evidence-based forecasting, all drawing on the best available methods. Of the ten principles, I will highlight just two here: objectivity, and openness and transparency.

Again, I want to stress objectivity as a principle for tackling these biases. You have to be very aware of these biases, both in individuals and in organizations, and of how they might crop up in designed minds, and then try to counteract them very explicitly. We are very far from having a full set of tools for preventing these problems, but there are some ways to do so to various degrees. The principle is not fixed in that sense; it keeps drawing on the latest well-validated knowledge to provide better procedures.

The principle of openness and transparency is crucial, and it’s going to be hard to get everyone to agree on that with an AI project, because secrecy may be seen as a competitive advantage. A lot of governments will not want to say what they are working on, especially for military applications. But I think people like us who are concerned about these issues should pressure people to make their projects as open as possible. Even if they cannot be fully open, they should open up as much as possible so that we can actually see whether they are following these guidelines. Are they recognizing these various kinds of possible problems and threats and doing something about them? Having that kind of public discussion will reduce the chances of things going off in some unwise direction.

Essentially, you can see four possible futures. If we don’t get superintelligence and we don’t get integral intelligence or superwisdom, we have pretty much business as usual. Things will go on as they are. We will make slow progress, and it’s going to be really ugly as we go along. I am a strong believer in progress, and I think we have made a lot of progress, but the world is pretty ugly in its current state. It has lots of room for improvement.

If we get integral intelligence without superintelligence, what would that be like? That seems possible. People would be a lot more responsible, especially at the organizational level, and they would make much wiser decisions. I think progress would be faster, because you would not have to undo so many things. The costs would be lower; there would be fewer bad side effects to be sued for later on, and so on. Whereas if you had superintelligence without integral intelligence, it’s going to be fast but messy. You would probably achieve a lot of specific goals very rapidly and efficiently, from a certain point of view, but probably with a lot of the collateral effects that I have talked about.

Ideally, if we make good progress on superintelligence and can integrate that with integral intelligence, so that it takes account of all these other factors, then we could end up with a pretty good world. It’s not going to be perfect, because I don’t believe that conflicts of goals would disappear even among perfectly wise people. But things should go a lot more smoothly, and we should resolve issues a lot more quickly.

I can’t say what the exact outcome would be, in terms of what that world would be like, because to say that with any great degree of confidence would mean that I am the wisest person in the world. What I think I can recommend is that we follow the principles embodied in the proactionary principle. That is a process for reaching good decisions, and it deliberately does not specify the decisions beforehand, so I cannot specify what this world would be like. All I can say is that we would get there a lot faster, and it should be a lot better than otherwise. In my view, it would include things like, despite some skepticism here, a life that does not have to end. I don’t particularly like the word “immortality,” but it would include agelessness: aging ceases to exist and you choose how long you live, apart from huge catastrophes.

It would also include re-engineering the body and brain in ways that improve on evolution. Evolution and nature have done a pretty great job in a way, but there are a lot of problems; we need debugging pretty seriously. To me it’s fairly obvious that we have to go in that direction for the world to become a really smart and wise world. Again, I cannot predict that. All I can say is, let’s make wiser decisions as well as more intelligent decisions, and I think we’ll all be happy with the outcome of the Singularity. Thank you.


Originally published at metaverse.jeriaska.com.