Bringing Humanity & the Biosphere through the Singularity

/r/21dotco
Dec 24, 2015


December 18th, 2007

With advanced nanotechnology and machine intelligence on the horizon, we face a future of vast change in our physical world and the world of the mind. But we need not abandon efforts to steer this future toward one which will work for both humans and the biosphere. Christine Peterson identified certain ground conditions needed for such a success in the context of powerful technologies in her 2006 Singularity Summit at Stanford presentation entitled “Bringing Humanity & the Biosphere Through the Singularity.”

The following transcript of Christine Peterson’s Singularity Summit at Stanford presentation entitled “Bringing Humanity & the Biosphere through the Singularity” has not been approved by the author.

Bringing Humanity & the Biosphere through the Singularity

The title of my talk “Bringing Humanity and the Biosphere through the Singularity” I thought was a nice modest goal for twenty minutes. You notice next to my name I have the word “translator.” Most of the ideas I’m presenting here are not my own ideas. If there are errors, that’s probably my fault as translator, but there will be citations at the end for those of you who want to speak to the real innovators here.

What are the assumptions here? I’m going to assume that advanced nanotechnology and augmented intelligence, as described by our previous speakers, both arrive at some point. I’m not specifying when that is. What is the goal? The goal for this talk is to try to identify a pathway for the wellbeing of unaugmented humanity and of the biosphere. I got some flak from some people saying, “What about augmented humanity?” Well, I’m looking at the weakest entities here. It’s a canary-in-the-coal-mine approach. These are the folks who need the most help.

Given that we only have twenty minutes, rather than deal with the biosphere separately, I’m going to say that the biosphere’s wellbeing is dependent on humanity caring. It’s a human value; it’s one of my human values. If it’s not one of your human values (I hope it is), strike that and put in what you value, whether it’s art, religion, or whatever your personal value is. We are looking for a pathway for humanity and for human values. This, of course, is a general approach, not a detailed plan. There is no detailed plan yet.

Augmented intelligence, we have at least three pathways. There may be more. You could augment humans. You could augment other species. Did you know there is a U.S. congressman who is worried about the Singularity? This is Congressman Brad Sherman of Southern California, and I was speaking with him recently. This is one of the pathways he looks at. I took the opportunity to tell him a point made by Brad Templeton, who is here in the audience, saying, “Make sure, if there is anybody doing this, you do not want to use chimpanzees. Make sure you use bonobos. This is critical.” He takes this very seriously. He said, “Gee, why is that?” And I said, “Bonobos are much nicer people.” He really listened, because in his talk the next day he told the audience, “If we’re going to do this, make sure you use bonobos, not chimps.”

So, we’re cool. Congressman Sherman is out there. He’s on our side. The other approach though is software. That is the one I am going to assume in this talk, and the only reason for that is we need some simplicity in doing these thought experiments. It gets much more complicated with these other two approaches, so I’m just going to look at software. What would be the ground conditions for success? There is nothing wrong per se with having lots of brighter or brilliant entities around. That does not necessarily cause problems. There are already huge differences in intelligence levels among human beings.

What’s the problem? Well, the question is whether these weakest entities can hold onto the resources that they need. They need protection from physical coercion. They need protection from economic coercion, such as excessive taxation. You can tax someone into non-existence. The issue is power and how it is wielded. This is a problem we have today. It just gets a lot worse, I guess.

We have some experience dealing with very powerful entities. There are individuals today who are wealthy enough to, say, buy one of the smaller countries. There are very powerful corporations. There are extremely powerful governments. They can and they do cause tremendous trouble. But we have learned some mechanisms for reducing that, not eliminating it. We have our Constitution, Rule of Law, contracts, property rights, balance of power and checks and balances strategies, mutual defense agreements, and game theory. The key thing here though is we do need the initial ownership of the current resources by human beings to be respected and enforced.

This is property rights. There are different kinds. There are computational and non-computational property rights. It turns out, when you think about it, that securing these non-computational property rights, physical rights, food, these kinds of things, in the long term depends on computational property rights. We all know, regardless of whether you buy the Singularity scenario or not, the world is going to be increasingly computerized. Those computers are, god forbid, going to be running software on them. This is increasingly controlling the world around us and our resources. If the thought of that does not disturb you, then you don’t really understand software.

If our computers aren’t secure, we are going to have a very insecure physical world. Let’s look at computational first. What do you need? At the very minimum, those of you who do computer security know it’s a very complicated topic with a lot of layers. I’m going to look mainly at one layer, which is the operating system. You need a secure operating system. You need one structured such that a running program has by default no authority at all. There are techniques for providing what we call fine-grained access control to resources inside the computer. This is capability security. It provides you a framework with rules for the exercise and transfer of permissions. It is basically a constitutional system inside the software for property and contract. It gives you a minimum framework for “law,” enabling voluntary arrangements and enforceable contracts.
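The capability idea Peterson describes, a running program with no default authority that can use only the permissions explicitly handed to it, can be sketched in a few lines. This is a minimal illustration of the object-capability pattern; all class and function names here are hypothetical, not taken from any real capability operating system:

```python
# Minimal sketch of object-capability security: a component starts with
# zero authority and can only use capabilities explicitly passed to it.
# Names (FileReadCap, untrusted_program) are illustrative assumptions.

class FileReadCap:
    """A capability granting read access to exactly one resource."""
    def __init__(self, contents: str):
        self._contents = contents  # the only thing this cap exposes

    def read(self) -> str:
        return self._contents


def untrusted_program(caps: dict) -> str:
    """Runs with no default authority: no filesystem, no network,
    only whatever capabilities the caller chose to delegate."""
    if "config" in caps:
        return caps["config"].read()
    return "no authority granted"


# The caller decides exactly which resources to delegate, and how much.
config_cap = FileReadCap("max_speed=10")
print(untrusted_program({"config": config_cap}))  # -> max_speed=10
print(untrusted_program({}))                      # -> no authority granted
```

The point of the pattern is that authority flows only through explicit references, which is what makes fine-grained access control and confinement possible.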

The nice thing about this law is that it is not like human law in the physical world. It’s more like physical law in the physical world. You can really make these things unbreakable inside the computer. What results do you get from doing this? You can confine a program to a virtual machine inside this computer. If you were to bring up augmented intelligence software on top of such a system, it could not do damage outside the system. It would be confined. Of course, many caveats and complexities are not covered here. Those of you who are into computer security who want to follow this up, there are discussion groups on this topic I can refer you to.

Let’s say you have a layer of secure operating systems. On top of that layer you build something called smart contracts. These are contracts, simple ones, embodied in software. They are automatically enforced inside the computer. Now, the challenge here is that the types of rules to be enforced have to be very simple. They have to be much simpler than the current legal code. For example, a typical real estate lease is far too complex to enforce this way. You would need to design your contracts and your rights such that the system can understand and enforce them. I believe there is work being done on a language that would be used for these smart contracts. If you pull this off, what you get is a software environment where force does not work. If you’re going to have an intelligence explosion, that sounds like a good place to me to do it.
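A toy example may make the smart-contract idea concrete: a contract simple enough that the software itself can check and enforce every term, with no court involved. This escrow sketch is an illustration under that constraint, not a real contract language; the names and rules are assumptions:

```python
# A toy "smart contract": an escrow whose terms are enforced mechanically
# by the code itself. Deliberately far simpler than a real estate lease;
# only rules the software can verify. All names are illustrative.

class Escrow:
    def __init__(self, buyer: str, seller: str, price: int):
        self.buyer, self.seller, self.price = buyer, seller, price
        self.deposited = 0
        self.delivered = False

    def deposit(self, amount: int):
        self.deposited += amount

    def mark_delivered(self):
        self.delivered = True

    def settle(self) -> str:
        # The rule is simple enough to enforce automatically: funds go to
        # the seller only if payment is full AND delivery has occurred.
        if self.delivered and self.deposited >= self.price:
            return f"release {self.price} to {self.seller}"
        return f"refund {self.deposited} to {self.buyer}"


deal = Escrow("alice", "bob", price=100)
deal.deposit(100)
print(deal.settle())   # -> refund 100 to alice (not yet delivered)
deal.mark_delivered()
print(deal.settle())   # -> release 100 to bob
```

Note that every condition the contract depends on is visible to the program; that is exactly why the rules must stay much simpler than today's legal code.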

So you have your layer of secure operating system. On top of that you have your layer of smart contracts. On top of that you would build automated, mutual defense systems. This is where the software world interfaces with the physical world. In order to have such a system, you would need a consensus on initial asset division, what is a violation of that, what response is merited, and only simple rules can be enforced. In order for this to work, again, the resources brought to bear have to be greater than the violators of the rules. Those who are on the side of the enforcement have to obligate themselves and some resources to make this happen.
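The resource condition above, that the enforcers' combined resources must exceed the violator's, can be stated as a one-line check. The function and the numbers are purely illustrative assumptions:

```python
# Sketch of the mutual-defense resource condition: a response is only
# viable when the defenders' pledged resources outweigh the violator's.
# Names and quantities are illustrative, not from any real system.

def defense_feasible(pledges: dict, violator_power: int) -> bool:
    """Defenders prevail only if their combined obligated resources
    exceed those of the rule-violator."""
    return sum(pledges.values()) > violator_power


pledges = {"alice": 40, "bob": 35, "carol": 30}  # obligated resources
print(defense_feasible(pledges, violator_power=90))   # -> True  (105 > 90)
print(defense_feasible(pledges, violator_power=120))  # -> False
```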

In this space, you are now envisioning a world with multiple artificial intelligences, of course, and humans. Some of them are on the side of the “good guys” who are trying to do the defense, and some are perhaps the not-so-friendly entities. On the not-friendly side you have got learning components. You are up against a very intelligent and changing entity. You have to have some change and learning on the side of the defense as well. But I’ve already said you can only enforce very simple things. How do you deal with that problem? Well, you could have a filter that filters this complex strategizing down to very simple enforcement.
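That filter idea, a learning component may plan however it likes, but only a small fixed set of enforceable actions ever gets through, can be sketched directly. The action names here are hypothetical illustrations:

```python
# Sketch of the "filter" between complex strategizing and simple
# enforcement: whatever the learning defense component proposes, only a
# fixed, pre-agreed set of simple actions is allowed through.
# Action names are illustrative assumptions.

ALLOWED_ACTIONS = {"ignore", "quarantine", "revoke_capability"}

def filter_strategy(proposed_actions: list) -> list:
    """Reduce an arbitrarily complex strategy to its enforceable subset."""
    return [a for a in proposed_actions if a in ALLOWED_ACTIONS]


# A sophisticated planner might propose anything...
plan = ["quarantine", "counterattack_network", "revoke_capability"]
# ...but only simple, pre-agreed responses survive the filter.
print(filter_strategy(plan))  # -> ['quarantine', 'revoke_capability']
```

The defense can keep learning and adapting upstream of the filter, while the enforcement layer itself stays simple enough to remain verifiable.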

The other problem is social engineering. How do you break into a system today? Why not just look on the person’s computer and see the post-it note with their password on it? Call them up on the phone and say you’re the IT guy and need their password. That’s social engineering. You can trick people into giving up what they think they want. How do you get around that when you set up your defenses? You can set up something and throw away the key. Make it so that people who set it up cannot change it. You can require a large super-majority to change it, or a long cooling off period to get the stability you want in these systems.
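The super-majority and cooling-off safeguards can be combined in a small governance sketch: a rule change needs a large vote to become pending, and even then it cannot take effect until a waiting period has passed, so no one can be talked into a quick change. The thresholds and names are illustrative assumptions:

```python
# Sketch of hard-to-change governance: rule changes require a large
# super-majority AND a cooling-off period before taking effect, so the
# operators cannot be socially engineered into an immediate change.
# The threshold, period, and class name are illustrative assumptions.

class GuardedConfig:
    SUPERMAJORITY = 0.8        # 80% of voters must approve a change
    COOLING_OFF = 30 * 86400   # 30 days, in seconds

    def __init__(self, rules: dict):
        self.rules = rules
        self.pending = None    # (new_rules, approval_time) or None

    def propose(self, new_rules: dict, yes_votes: int,
                total_voters: int, now: float):
        # Only a super-majority can even start the clock.
        if yes_votes / total_voters >= self.SUPERMAJORITY:
            self.pending = (new_rules, now)

    def apply_pending(self, now: float) -> bool:
        # The change lands only after the cooling-off period elapses.
        if self.pending and now - self.pending[1] >= self.COOLING_OFF:
            self.rules = self.pending[0]
            self.pending = None
            return True
        return False


cfg = GuardedConfig({"force_allowed": False})
cfg.propose({"force_allowed": True}, yes_votes=9, total_voters=10, now=0)
print(cfg.apply_pending(now=1))                        # -> False (still cooling off)
print(cfg.apply_pending(now=GuardedConfig.COOLING_OFF + 1))  # -> True
```

The "throw away the key" option Peterson mentions is the degenerate case: simply provide no change mechanism at all.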

The goal is, and it sounds awful, but I don’t really see any way around it. You need to have some kind of enforcement system that actually reacts violently to inappropriate violence by any entity. That’s defense, right? How do you build something like that? I have heard the phrase, and this was the title of the first talk on this topic by Mark Miller, “Computer Security as the Future of Law.” How do you do that? What we have here is the E Language, one tool that is out there. There are a couple of these capability operating systems that you can use. There is a website about capabilities, and there is another website about contracts. There is another thing I didn’t put on here. I did a little Dummies Guide to capabilities myself. I can give you that if you are interested.

This is my last slide. You know I’m from Foresight Nanotech Institute. You may wonder, why am I talking about this? For one thing, Eric did a great job with nanotechnology earlier today. But also, these topics interact. When you think about the long-term in nanotechnology, we have heard about nanotechnology weapons, you end up thinking about automated mutual defense systems. It has to be automated.

Here are some resources for those of you interested in tracking this. At the bottom is the URL for the roadmap project Eric mentioned. If you are interested in technical information on nanotechnology, we have a very technical conference in the spring. There is always my blog, where I track this kind of thing, and the main website. But the main thing I want to bring your attention to is the middle dot here, the Foresight Vision Weekend. That’s where we try to deal with these more ambitious topics. We have a lot of attendees here today. It’s kind of similar to John Smart’s Accelerating Change conference in a way. We are inviting all of you, of course, and all the Accelerating Change folks to join us at the Vision Weekend. Thank you very much.

Originally published at metaverse.jeriaska.com.
