Artificial General Intelligence in Virtual Worlds

December 11th, 2007

On November 4, 2007 at the Vision Weekend Unconference held by the Foresight Nanotech Institute, Ben Goertzel spoke on the subject of creating artificial general intelligence in virtual worlds. His company Novamente has partnered with the Electric Sheep Company to roll out virtual animals such as dogs and parrots in Second Life. He believes that by gradually climbing the scale of behavioral and cognitive complexity, it will eventually be possible to create human-level virtual agents in online virtual worlds.

The following transcript of Dr. Ben Goertzel’s Foresight Vision Weekend “Artificial General Intelligence in Virtual Worlds” has not been approved by the author.

Artificial General Intelligence in Virtual Worlds

I’m going to talk about some work I’m doing right now, which is the practical stuff of running AIs in virtual worlds such as Second Life. I’m also going to talk more broadly about what I see as the long-term future of AI in virtual worlds: why I think AI is important in terms of helping virtual worlds get where they need to go, and why I think virtual worlds are important in terms of helping AI overcome some of the obstacles it has faced.

I’m going to talk in general about why I think AIs and virtual worlds are a good partnership. Then I’ll talk about the specific application we are making now, which is virtual animals in Second Life. Then I’ll talk a bit about natural language processing and reasoning in AI: what we can do with it now without virtual worlds and embodiment, and why I think virtual worlds and embodiment can help us overcome some of the issues that arise there. For those of you who have seen me give talks on this recently, for example at the Singularity Summit, the virtual dog had not yet been unveiled. We showed that off at the Virtual Worlds Conference a couple weeks ago. But the new content here is some things about language processing and reasoning, and how they tie in.

We are starting off our exploration of AI in virtual worlds with virtual animals. That’s just the beginning. You can think of a whole bunch of other interesting things to do, from virtual talking birds, virtual shopkeepers, bartenders, job recruiters, a digital twin that stays online all the time and does whatever you’d like to do when you have to be in your First Life and can’t be in Second Life. Ultimately, what I’m aiming at is virtual scientists. For those of you who saw Melanie Swan’s talk yesterday, she was showing some nanotech stuff that’s going on in Second Life right now. What if you had a research lab in Second Life with virtual scientists analyzing data, showing it to people, and making scientific progress faster than human scientists?

There’s a heck of a lot of possibilities, and it’s something we can make real right now. AI software is real, virtual worlds are real. It’s not something we need to make roadmaps and action plans for and theorize about. We can actually get out there, build the technology, and put it out there for people to interact with. One of the interesting things you can do with it is overcome many of the bottlenecks that exist in terms of having computers interact with people using natural language.

A brief bit on terminology before we get into more interesting things. I like to talk about “artificial general intelligence” as distinct from what I call “narrow AI.” What I mean by “AGI” is AI systems that really have a sense of autonomy, of their own selves, of their relationship to the world, and are able to conceive of a problem and figure out how to solve it, even if the problem was not known to the programmers who created the program. This is a different thing from a narrow AI program, which may be very smart at solving some particular problems. Google is great at translating keywords into lists of URLs.

The programs that various researchers are playing around with in the context of the DARPA Grand Challenge are increasingly good at driving cars, which is a good thing. As I always say, I would like to see every other driver replaced by a computer. Chess-playing programs can beat the hell out of me at chess. We haven’t solved Go yet, but I’m confident that it will be solvable. But each of these programs does one thing. What you really want is a program that is generic, not just by being a grab bag where you put fifty different specialized programs in one container, but something that can generalize and abstract beyond many individual problem domains, so as to be able to confront new problem domains that were never even dreamed of when it was first written.

One of the most surprising and frustrating lessons in the history of AI is that success at narrow AI, solving particular problems, is not all that much use if your goal is to solve general AI. That was a shock. In the ’60s, people thought it would be. People thought if we can solve chess and algebra, of course we can solve walking down the street, playing fetch, playing 20 questions, recognizing a face, going to kindergarten. To our surprise, it was easier to make computers that could play chess and take integrals and derivatives than ones that could pass kindergarten. That’s a nontrivial lesson about how difficult these things we take for granted are. The reason they seem easy to us is that we have so much awesome genetic programming to help us handle them, but that doesn’t mean they are fundamentally easy.

I’ve been putting in some effort in the last couple years trying to catalyze the AI research community toward addressing these questions of general intelligence, rather than just focusing on solving particular problems. We have a conference that Bruce Klein, who is my collaborator on Novamente, our AI company, and I are organizing. It’s a conference in Memphis, Tennessee from March 1st to 3rd, 2008, which is the first serious research conference focused only on artificial general intelligence. We have around fifty academic, university, and industry researchers coming in from around the world to discuss the research pertaining to building general intelligences.

When the AI field was started, way back before I was born, the focus was on building thinking machines. For practical reasons, the field has drifted more toward narrow problem solving, but a number of us involved in the field are trying to direct it back. Let’s say you buy that that’s the right thing to do. Instead of focusing on narrow problem solving, we should focus on general intelligence: make a thinking machine as smart as people, or ultimately smarter. What’s the right way to do it? Of course there is no common agreement on that. There are some people at Google saying Google is an AI company, they’ll make smarter and smarter search engines until eventually it will be able to understand natural language like a human. That’s one avenue. Another avenue is physical robotics. Some thinkers such as Rodney Brooks will tell you that’s the only real path: only by engaging with the physical world with all its sensory and motoric richness, like we humans do, can you get an AI system with a general intelligence similar to humans.

My own view is somewhere in between those two things. I don’t think a body is necessary for AGI. I think a body is very useful for AGI, especially if you want your AGI to be anything like a human being, since we are rather tied to our bodies. I see no reason why you could not make a massively superhuman AGI that did nothing but prove math theorems and whose whole input and output was mathematics. However, it would be hard to figure out how to create that thing, it would be hard to debug it, and it would be hard to relate to. It would have a hard time relating to us. There are many practical advantages to embodiment; it’s an overstatement to say it’s necessary. The next question is: what kind of body do you need? Do you really need a physical robot? Is a robot good enough? Do you really need something which has skin and all the richness that a human body has, kinesthesia and so forth?

Or, on the other hand, is a virtual body good enough? Something like this little girl here in Second Life obviously doesn’t have the same richness of experience as a real little girl. On the other hand, maybe it provides what is needed for an AI. And maybe if it doesn’t this year, it will next year, or the year after that, because the rate of improvement of online virtual worlds is extremely fast, much more so than the rate of improvement for physical robots. So my inclination is to believe that virtual world embodiment is rich enough as an embodiment vehicle for AIs to confer most of the advantages you’d get from physical embodiment, and it’s just a heck of a lot easier to work with.

I started fiddling around with embodying our AI system, the Novamente cognition engine, in 2005. The basic AI system I’m working on, the Novamente engine, we started working on in 2001, and it was a background R&D project for a while while we focused in the foreground on various AI consulting projects. Recently, the AI engine itself has moved into the foreground of our efforts. For our first experiments with embodying the AI engine, we used the open source game engine Crystal Space and built a world on top of it. The AI controlled the little guy you see there, and the big guy is controlled by a human. This was actually a screenshot of an experiment we did to try to teach the AI something that psychologists call object permanence.

To put it simply, you want to teach it that when it’s not looking at something, the thing still exists, which is hopefully obvious to everyone right here. It’s not obvious to a five or six month-old baby. Human babies learn that somewhere between six and nine months, generally speaking. Our AI had to learn that, too. It didn’t know that, it just learned it through experience. When something is hidden, it looks in the same place, and more often than not the thing is still there. If not, it may be somewhere else around.
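The kind of statistical lesson involved can be sketched in a few lines. Everything here — the class name, the thresholds, the 90% persistence rate — is illustrative, not the actual Novamente code:

```python
import random

class PermanenceLearner:
    """Tracks how often a hidden object is re-found where it vanished."""
    def __init__(self):
        self.found = 0
        self.trials = 0

    def observe_trial(self, still_there):
        self.trials += 1
        if still_there:
            self.found += 1

    def expects_persistence(self):
        # After enough evidence, believe that hidden things usually persist.
        return self.trials >= 10 and self.found / self.trials > 0.5

random.seed(0)
agent = PermanenceLearner()
for _ in range(50):
    # In a world with object permanence, the toy is almost always
    # still behind the screen when the agent looks again.
    agent.observe_trial(still_there=random.random() < 0.9)

print(agent.expects_persistence())
```

The point is that the agent is not told the rule; it accumulates evidence that hidden things usually persist, just as the talk describes.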

Of course, not all virtual worlds would have that property. You could build a virtual world without object permanence, but we chose not to because we wanted to teach it that. One thing you notice about a virtual world like this single room we built is that it’s kind of empty. There is even less going on there than on Second Life. It seems clear that public virtual worlds provide a lot more avenues for teaching AI’s interesting stuff. There are people building stuff there all the time. There are people and other agents walking around, doing things. It’s not as rich as the real world, yet. Ultimately, it may be richer than the real world. It’s certainly much richer than a single room you build as a kind of toy domain. One of the most interesting things there is the potential to have tens of thousands, hundreds of thousands, millions of people teaching your AI’s, interacting with them, teaching them both explicitly by correcting them when they do something idiotic, or just implicitly by giving them examples to watch and interacting with them.

This can be something very powerful. If you think about it broadly, Google gets much of its strength from what people put into it. We build links on webpages, we upload text through Gmail, we click on ads. That information from human activity is what makes Google powerful. Wikipedia, of course, is the same way. People are putting that information in there. The computer tools are serving as a medium to let human intelligence be properly organized so as to help enhance other humans’ intelligence. If you have human beings teaching AIs in virtual worlds, in a similar way you have many human beings all collaborating to make these AIs smarter and smarter. Since the AIs can all be connected on the back end, what you teach one AI will ultimately benefit other AIs.

The community of virtually embodied AIs can get smarter and smarter based on the collective teaching of all the people online. This is a very powerful dynamic. It could be achieved with physical robots; it’s just more expensive to create a bunch of physical robots, distribute them all over, and get people to use them in their everyday lives. Whereas online, once you’ve created one, replicating it is significantly less expensive, though there is computer hardware cost, of course.

We are starting out with virtual animals, which are letting us deal with the integration of perception, cognition, emotion, personality, social interaction and very simple language, in terms of accepting commands. Step two of this, with luck, we would roll out a year from now. This would be virtual talking animals. The vision I have in my mind is a virtual talking parrot. I figure if its language facility is kind of idiotic at first, the end user will be forgiving, since it’s just a bird. The fact that it can talk at all is sort of impressive. The potential of these birds for language learning can be pretty awesome. Let’s say there are 50,000 people in Second Life now. Of course, Second Life is just one virtual world, but just to take that as an example: let’s say 20,000 of them had virtual birds, which, depending on the business model, is not all that infeasible.

Things can be adopted very rapidly in-world. You’ve got 20,000 people online at any given time and the virtual bird is listening to them. They might be giving information about what is going on somewhere else, or what their friends are doing. That’s an awful lot of teachers. It’s a lot more than any human baby has. From an end-user point of view, we believe people will like their animals to have some individual personality, so the software object encapsulating an individual animal has something called a personality filter attached to it, which controls how much of the collective animal unconscious gets to filter into that individual animal. Some people think the human brain works that way too, although I don’t subscribe to that.
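The personality-filter idea can be sketched as a simple weighted blend. The function name, behavior names, and the `openness` parameter are all hypothetical stand-ins, not Novamente’s actual design:

```python
# Sketch: each animal blends the collective behavior pool with its own
# learned tendencies. "openness" in [0, 1] controls how much of the
# collective "unconscious" filters into this individual.

def blended_response(individual, collective, openness):
    """Mix per-behavior tendencies from the individual and the collective."""
    behaviors = set(individual) | set(collective)
    return {
        b: (1 - openness) * individual.get(b, 0.0)
           + openness * collective.get(b, 0.0)
        for b in behaviors
    }

me = {"fetch": 0.9, "bark": 0.2}          # this dog's own tendencies
everyone = {"fetch": 0.5, "bark": 0.6, "roll_over": 0.4}  # collective pool

print(blended_response(me, everyone, openness=0.5))
```

Notice that a behavior this dog has never learned ("roll_over") can still leak in from the collective pool, which is exactly the filtering-in the talk describes.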

Another product that is a little further down the road that we’re very excited about is virtual babies. This was actually the most tame virtual baby I could find in Second Life, given the nature of Second Life culture. I think that has a lot of opportunity as well, giving people a virtual baby. Say it’s not that smart at first, have it for a few years and it will improve. It will get more and more intelligent as you teach it. And if you don’t like it, you can just throw it out and get a new one. It doesn’t work with real babies, at least not in this country. Mostly I’m going to talk from an AI point-of-view.

From a virtual worlds point of view, I think AI makes a lot of sense, too. I’ll spend about a minute on that. If you go on Second Life right now, I do believe tools like that are the beginning of the metaverse. This is going to have a huge impact on the evolution of intelligence on earth. There are a lot of problems with it right now. Some of them are just software problems: the server is not as scalable as it could be, and the learning curve is measured in weeks rather than minutes. Another issue is just that the virtual worlds are kind of empty. If you know where to go you can find interesting stuff going on, but they’re not exactly teeming with life like the rainforest and so forth. I think adding AIs can make virtual worlds a lot less empty: ambient wildlife, opponents for in-world games, companions to show you around, virtual shopkeepers and so forth.

AI has a huge potential to increase the usefulness of virtual spaces out there. Many of the virtual spaces are empty because whoever built them didn’t want to pay someone to staff them. Of course, what you want is for communities to spontaneously evolve, but just as with any other kind of organization, having a seed to help catalyze the organization of the community can be important. If an AI is sufficiently non-idiotic, it may be able to serve that role. There are a whole bunch of possibilities here. If we sat here for an hour, we could dream up a thousand possibilities for AI’s in virtual worlds. Some of the obvious ones are salespeople, customer service representatives that help you walk through the problem with your computer by looking at a virtual computer in Second Life.

“Here’s the power cable. Your computer has one of those. Is it plugged in?” Tour guides to replace search in Second Life. Matchmakers who can go hit on chicks for you and absorb the pain of rejection. My favorite idea is the virtual Amway salesman. Have an AI salesman whose goal is to sell other salesmen, whose goal is to sell other salesmen, whose goal is to sell other salesmen. And ultimately, when you go all the way down the hierarchy, what the final salesmen are selling is repellent spray to kill Amway salesmen. It’s my ideal plan for crashing Linden Lab’s Second Life servers.

My company, Novamente LLC, is working with Electric Sheep Co. We have our partnership aimed at rolling out a series of more and more intelligent agents, not just in Second Life but in other virtual worlds over the next few years. The first thing we are doing is virtual animals. We have not quite decided on the business aspect of it, exactly how they will be introduced to consumers and what the business model will be. We are discussing a bunch of possibilities. We plan to launch this likely in the second quarter of 2008.

The virtual animals that we are playing with now are dogs, although the AI is pretty much generic. We could apply the same AI to pretty much any animal that can operate on the ground. We haven’t done a lot of 3D stuff yet, in terms of birds flying around. They have spontaneous behaviors: each animal has its own goals, and it tries to learn plans to fulfill its goals. Their in-built goals are things like “I want food, I want water. I’m bored, I want novelty. I like social interaction.”
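A toy version of that goal-driven loop might look like this. The goal names come from the talk, but the class, urgency numbers, and plans are made up for illustration:

```python
# Sketch of a goal-selection loop for a virtual animal with in-built
# drives ("I want food, I want novelty, ..."). Not the shipped pet code.

class VirtualDog:
    def __init__(self):
        # Urgency of each in-built goal; drives build until satisfied.
        self.urgency = {"food": 0.3, "water": 0.2, "novelty": 0.5, "social": 0.4}
        self.plans = {  # learned or built-in plan per goal
            "food": "beg_near_owner",
            "water": "go_to_bowl",
            "novelty": "explore_parcel",
            "social": "approach_avatar",
        }

    def most_urgent_goal(self):
        return max(self.urgency, key=self.urgency.get)

    def step(self):
        goal = self.most_urgent_goal()
        self.urgency[goal] = 0.0       # acting on a goal satisfies it
        for g in self.urgency:         # all drives keep building over time
            self.urgency[g] += 0.1
        return self.plans[goal]

dog = VirtualDog()
print(dog.step())  # novelty is the most urgent drive at the start
```

Each call to `step` picks the most pressing drive and executes its plan, so the animal shows spontaneous, shifting behavior rather than waiting for commands.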

We don’t actually have reproduction in there yet; we’re worried about what the Second Life human population might do with it. In addition to spontaneous goals, a key feature is the ability to learn new behaviors from human teachers. There are three aspects to that. One is copying what people do, what I call “imitative learning.” The second is “reinforcement learning”: when it stands up when it’s supposed to sit, you correct it. The third is physically corrected learning: if the dog is supposed to sit and it gets up, you push it back down. Through these three kinds of interaction you can teach the virtual animals to do different things.
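The reinforcement channel in particular can be sketched as a simple weight update. The update rule, the learning rate, and the behavior weights are assumptions for illustration, not Novamente’s actual learning math:

```python
# Sketch of the "good"/"bad" reinforcement channel: the owner's feedback
# nudges how likely the dog is to repeat a behavior in that situation.

def reinforce(weights, behavior, reward, rate=0.2):
    """Move the behavior's weight a fraction of the way toward the reward."""
    old = weights.get(behavior, 0.5)
    weights[behavior] = old + rate * (reward - old)
    return weights

w = {"sit": 0.5, "stand": 0.5}
reinforce(w, "sit", reward=1.0)    # owner says "good" after sitting
reinforce(w, "stand", reward=0.0)  # owner says "bad" after standing up
print(w)
```

After a few rounds of this, "sit" outweighs "stand" in the sit-command context, which is the observable effect of the corrections the talk describes.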

That brings us to the machinima that I showed. This machinima was made by a combination of AI and cheating. The state of our project right now is that we have the AI learning engine working on the back end, and Electric Sheep is still working on the proxy code that connects Second Life to our AI code. It works, but it still has bugs. The limitations were patched here with some direct LSL script. This is part AI and part smoke and mirrors, but it’s illustrative of what a small subset of the product is intended to do.

Now, he is trying to teach the dog to sit. He gave it a command that said, “Hey, pay attention to me.” He says, “I’m sitting.” The dog understands a very limited subset of English, but it understands that that means “I’m giving you an example of some behavior. Pay attention, record what you’re seeing, because I may ask you to try that again later on.” It watches you do it, then you say “Try sitting” and it tries it. It doesn’t have sitting built into it, but what it does have, which is kind of cheating, is a mapping between the animations a human avatar can do and the animations the dog can do, at the level of limb movements. It’s not at the level of “sitting” = sitting. It’s at the level of “that’s his leg, that’s my leg.” He contracts his leg; that’s equivalent to my contracting my leg. He turns his head; that’s equivalent to me turning my head. It maps an animation, and the animation corresponds to an individual motor action. There is no other way to do it in Second Life, because animations are done by running animation scripts, rather than using bones-based animation. When he says, “Try sitting,” the dog knows it’s supposed to try it. Then he gives it reinforcement, saying “good” or “bad.”

One of the easier ways to teach the dogs is to recruit an accomplice, where the accomplice will act something out. Then the dog is supposed to copy the accomplice. It means the dog doesn’t have to work to disambiguate what you’re doing that it should copy from what you’re doing that is your own behavior. If you’re playing fetch, you don’t want the dog to imitate throwing the stick. You want it to imitate getting the stick. The AI can disambiguate those things automatically. In a case like fetch, it’s actually easy for the AI. In general, it’s easier if the teaching function and the exemplar-for-imitation function are separated.

There are a couple dozen parameters that govern a dog’s personality. I went through a model of human personality, then the models from psychology of human emotions. I was amused to find that pretty much every human personality trait and pretty much every human emotion is found in dogs. Dogs can be jealous; they can be resentful, spiteful, as well as just happy or sad. So one of the things that we’ve done initially is to hard-code various emotional reactions to situations. But they are coded inside our AI’s knowledge representation so that they can be adapted and modified based on the dog’s experience. Even if the dog is initially wired to be jealous about certain things, it could learn not to be, if its experience guides it.

One thing I’m not going to have time to talk about in the next five minutes or so of the presentation is the actual AI learning engine underlying this. This is something I’ve talked about at previous conferences and is a big topic unto itself. We have our own whole integrative AI approach, which we intend as an architecture for a true thinking machine. Many years of research and thinking have gone into that, but just to give an extremely high-level overview that will mean a little bit to people who already know something about AI: the knowledge representation is a weighted, labeled hypergraph. It’s nodes and links.

You could view it as a midway point between an attractor neural net and a semantic net. The nodes and links have probabilistic truth values attached to them. They also have what we call “attention values,” which indicate the short- and long-term importance of a node or link to the system. There is a certain nonlinear dynamic that updates the long- and short-term importance values based on the system’s activities and decides what the system should pay attention to and what it should keep in memory. There is a probabilistic logic engine that updates the probabilistic truth values of nodes and links. And then there is an evolutionary learning system, which uses a combination of probability theory with genetic programming. That’s something called MOSES.
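As a rough picture of that representation, here is a minimal sketch of an atom (a node or link) carrying a truth value plus short- and long-term importance, with a toy stimulate/decay dynamic. All field names and constants are illustrative assumptions, not the engine’s real data structures:

```python
from dataclasses import dataclass

@dataclass
class Atom:                  # a node or link in the hypergraph
    name: str
    strength: float = 0.5    # probabilistic truth value
    sti: float = 0.0         # short-term importance (attention value)
    lti: float = 0.0         # long-term importance (attention value)

def stimulate(atom, amount):
    """Using an atom boosts its short-term importance; a fraction
    of that boost leaks into long-term importance."""
    atom.sti += amount
    atom.lti += 0.1 * amount

def decay(atom, rate=0.5):
    """Each cycle, unused atoms lose short-term importance."""
    atom.sti *= (1 - rate)

ball = Atom("ball", strength=0.9)
stimulate(ball, 1.0)   # the system just used this concept
decay(ball)            # one attention-allocation cycle passes
print(ball.sti, ball.lti)
```

The interplay of a fast-decaying short-term value and a slowly accumulating long-term value is one simple way to get the "what to attend to now" versus "what to keep in memory" distinction the talk mentions.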

MOSES learns new procedures for carrying out tasks, but then can represent the procedures as probabilistic knowledge to be reasoned on and stored in the node-and-link knowledge base. It’s a complex integrative architecture using probability theory as a common language to mediate between logical reasoning, evolutionary learning, a hypergraph knowledge base and a few other simple things. We believe this AI architecture is going to be adequate to actually make a human-level thinking machine. I’m not going to try to defend that in the next 37 seconds, but you can look at our company’s website, where there are a number of conference white papers, six or eight pages each, from various AI conferences. They will not satisfy you thoroughly, but if you look at that stuff and are intrigued I can give you access to some proprietary material on it. We haven’t published too much on it, but we’re not all that secret, if people are curious.

We have a book coming out next year from Springer-Verlag on probabilistic logic networks. That’s a whole domain unto itself. Getting back to the pets, although I would rather talk about the AI system, as it’s a longer and deeper topic. The software architecture that we’re using looks like this. The virtual world is at that end. There’s the proxy, which goes back and forth between Second Life and the pets. We have a server pool which actually controls the pets. That has an object for each pet and schedules between them.

That contains short-term memory of the pets, the procedures that the pets already know how to do, which are then triggered by various situations the pets find themselves in. The collective experience server over to the right is kind of the collective unconscious and long-term memory of the animals. Then, for learning, we use hill climbing, which is a simple, fast heuristic for surfing through the space of possible procedures to control actions to achieve goals. And then MOSES, the probabilistic evolutionary learning algorithm. This is slower to run, but can solve harder problems.
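Hill climbing over procedures can be sketched like this, with a “procedure” reduced to a short action sequence scored against a demonstrated trick. The action set, target trick, and fitness function are invented for illustration:

```python
import random

ACTIONS = ["sit", "bark", "roll", "fetch"]
TARGET = ["sit", "bark", "sit"]      # the demonstrated trick to learn

def score(proc):
    # Fitness: how many positions match the demonstrated trick.
    return sum(a == b for a, b in zip(proc, TARGET))

def hill_climb(steps=200, seed=0):
    rng = random.Random(seed)
    current = [rng.choice(ACTIONS) for _ in TARGET]
    for _ in range(steps):
        # Mutate one action; keep the mutant only if it does at least as well.
        candidate = list(current)
        candidate[rng.randrange(len(candidate))] = rng.choice(ACTIONS)
        if score(candidate) >= score(current):
            current = candidate
    return current

print(hill_climb())
```

This captures why hill climbing is the simple, fast option: each step is one cheap mutate-and-compare, but it can get stuck on harder, deceptive fitness landscapes, which is where a population-based probabilistic method like MOSES earns its extra cost.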

The first release is going to have all those. The things in red are stuff that exists but is more at the prototype stage; those will go into the second release. The first version of our virtual dogs won’t actually have a logical reasoning engine in it. It will just use evolutionary learning to achieve goals and to do tricks that were taught to it. Then in the second version, we’ll integrate the logic engine that we built for other applications. Object recognition is something else we have not built yet, but need to. Some objects in Second Life are tagged with metadata, but ultimately you want to look at a glockenspiel and figure out it’s a glockenspiel. That’s a whole subfield of AI in itself, but it’s easier in Second Life than with real image data, because you’re dealing with polygonal meshes.

This is a picture of what a scene in Second Life looks like to the dog. It isn’t as pretty as Second Life, but it’s a two-dimensional map, kind of like the back of the dog’s retina. In terms of developmental psychology, if you look at Piaget’s stages, for the dogs we’re kind of between the infantile stage and what Piaget called the concrete operational stage. The real trick is to get to what he called the formal stage, where you have things that can reason and hypothesize. We’re not going to get there with dogs. That’s what we’re hoping to do with talking parrots.

I want to give something new for those who have seen this spiel before. Why do I think virtual worlds are important for AI? Part of it is that you can get a lot of people to interact with your AIs. Part of it is that hopefully you can get people to pay for various things associated with the AIs: new behaviors for them, new tricks, and so forth. Part of it, though, is that I think AI can get past its bottlenecks of human language understanding and human commonsense understanding through embodiment in virtual worlds.

I’ve done a lot of work on natural language processing for corporate customers, because Novamente needs to make money as well as trying to build a superhuman thinking machine. What we find is that we can do a lot of cool stuff with natural language processing, but you get stuck on some problems, and here are some of them. Our natural language processing system can parse a sentence correctly almost every time. The problem is that it also parses it incorrectly almost every time, and selecting the right parse out of the list of fifty parses of a sentence out of Proust can be difficult.

Disambiguating word meanings can be hard, especially for adverbs and adjectives. Disambiguating prepositions is like uncharted territory; there are almost no research papers on that. “I ate lunch with a fork.” “I ate lunch with my friend.” “I ate lunch with salad.” “I ate lunch with great enthusiasm.” All those meanings of “with.” There are no computational linguistics systems that deal with that. Reference resolution is another thing. “The terrorists blew up my house, then the bastards clobbered my dog.” “The bastards” refers to the terrorists. That’s nominal reference resolution. We’re pretty good at that, but the AIs aren’t good at it. There are a whole bunch of language phenomena. Comparatives are another thing. “I eat more than you.” Does that mean I eat more food than you eat, or I eat more than 70 kilograms, or whatever you are? It’s obvious to a human, maybe not to a cannibal, but it’s obvious to humans in our culture.
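A toy version of preposition disambiguation makes the problem concrete: look up what role the noun typically plays. The hand-made lexicon below is a stand-in for the learned commonsense knowledge a real system would need; all of its entries are assumptions:

```python
# Sketch: disambiguating "with" by the typical role of its object.
# In reality this knowledge would have to be learned from experience,
# which is exactly the argument for embodiment.

ROLE_LEXICON = {
    "fork": "instrument",
    "friend": "companion",
    "salad": "co-theme",        # part of the meal, not a tool or partner
    "enthusiasm": "manner",
    "sword": "instrument",
    "uncle": "companion",       # usually the co-agent, not the tool
}

def sense_of_with(noun):
    return ROLE_LEXICON.get(noun, "unknown")

for n in ["fork", "friend", "salad", "enthusiasm"]:
    print(f"ate lunch with {n}: {sense_of_with(n)}")
```

The lookup table works for these four sentences, but the "unknown" fallback shows why it doesn’t scale: the table would need an entry, with context sensitivity, for essentially every noun in the language.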

Google is a great system but it hasn’t solved this problem. I like to play around with using Google as a question answering system. Ask Google how many years does a pig live, and it’s not too bad. “Some pigs live 15 years. Guinea pigs live one and a half years.” Ask it how many years does a dead pig live, and it doesn’t get it. It gives you all kinds of weird stuff. “How many years does a pig live in captivity?”

That’s simpler, but it still doesn’t get it. It gives you stuff about alligators. Even if you’re not asking a trick question, it doesn’t understand modifiers or context. Some people are trying to overcome this problem. Barney Pell and his colleagues at Powerset are working on that. You can ask it questions like, “Who mocked Blair?” Since Tony Blair from Britain is the best-known Blair, it finds examples of people making fun of Blair in different metaphors. That’s reasonable. It’s a step beyond Google. But it still doesn’t handle most of the problems that I cited on the previous page.

Just to go into this in slightly more detail, consider feeding a sentence like “Guard my treasure with your life” into our language parser. This is the kind of thing we’ve been working on for the virtual dogs, actually. Version 1 isn’t going to use this language engine, but we’d like to do that. Our language parser knows “your life” is a possessive, that “treasure” is the object of “guard,” that “with” binds “guard” to “life,” and so forth. Then we have some semantic mapping rules that try to figure out what it means. The protector is “you.” The asset being guarded is “treasure.”

So it can actually map out a lot of the semantics of the sentence into relationships. We’ve used this stuff to build various products. But think about the difference between “Guard my treasure with your life” and “Guard my treasure with your sword.” They’re not exactly the same, but they’re mapped the same by our NLP system right now, and that’s based on beyond-cutting-edge NLP technology.
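To make the mapping step concrete, here is a toy sketch of turning syntactic relations into role/filler pairs. The relation names and rules are invented for illustration, and the naive "with" rule is precisely where the life/sword distinction gets lost:

```python
# Sketch of semantic mapping for "Guard my treasure with your life":
# parser output (head, relation, dependent) triples become a role frame.

parse = [
    ("guard", "object", "treasure"),
    ("guard", "with", "life"),
    ("life", "possessor", "your"),
    ("treasure", "possessor", "my"),
]

def map_semantics(relations):
    frame = {}
    for head, rel, dep in relations:
        if rel == "object":
            frame["asset"] = dep
        elif rel == "with":
            # Naive rule: "with" always marks the instrument/stake.
            # "your sword" would land in the same slot -- the ambiguity
            # discussed above is invisible at this level.
            frame["instrument"] = dep
        elif rel == "possessor" and head == frame.get("instrument"):
            frame["guardian"] = dep   # "your" life => the guardian is you
    return frame

print(map_semantics(parse))
```

The frame comes out looking sensible, but nothing in these rules knows that a life is a stake while a sword is a tool; that distinction needs commonsense knowledge from outside the parse.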

How about “Guard my treasure with your uncle”? You could be picking up the uncle and clobbering the thief with him. On the other hand, it could mean you and the uncle together are doing it. Those would be mapped differently. Disambiguating that “with” requires the common sense that the uncle is more often the agent of guarding than the tool. We can actually handle this example, but it’s a simple example. The problem is that analogous phenomena occur with much more complex things. I made up a paragraph here an hour ago, just to show the combination of phenomena.

This is not that hard a paragraph. “I have a magical broadsword in my closet. It’s stronger than you think.” There’s a problem right there. Is it stronger than you think it is, or is it stronger than your power of thought? “Protect my treasure with it.” Now, that assumes you know which sense of “with” is intended. It also assumes “it” refers back to “broadsword,” rather than “closet.” “After you get rid of them, lie down with it, go to sleep…” “Go to sleep” is an idiom. You can assume the system knows that. “Don’t let go of it.” It has to know what “it” is: not the rock that you’re sleeping on, but the sword. “After you sleep, if you’re too weak to lift yourself up (the gravity field varies here)…” Now, is that on the rock, or on the planet you’re on? That’s kind of a context-dependent assignment of an antecedent. “Stick the thing in the ground and get up with it.” The thing has got to be the sword, right? We know that, but that’s because we have a picture in our mind and a knowledge of what’s happening.

A simple paragraph like this, an intelligent five-year-old child would just get it, right? The most sophisticated AIs right now don’t get it, because language is full of all these referential phenomena, and the AIs don’t have the context and common sense those phenomena assume. My belief is that the way to get an AI to understand this kind of language is to create an AI that can experience these kinds of events. I think a virtual world is good enough to let an AI experience those kinds of events, which is why I think virtual worlds are good enough to get you to general intelligence.

If you can help the system experience enough things to understand language at the level of a young child, then you can talk to it and teach it all the things that you want. I think the obstacle we need to overcome is not passing the Turing test, but being able to talk to it like you can to a five-year-old kid, so that you can teach it more and more. How do you overcome that obstacle? The problem is handling all these funny phenomena of reference resolution, disambiguation and so forth, which are what allow us to communicate compactly, sensibly, and intuitively with another being.

The kind of reasoning we can do now, if we cheat a little bit and tune everything right: we look at Simple English Wikipedia, which is a version of Wikipedia with very simplified language. We want the system to learn that eating chocolate makes you feel more awake. We want it to learn that because it knows that caffeine makes you feel more awake, and caffeine and theobromine, which is in chocolate, are related. You can do that if you use Simple English Wikipedia, where you have simple sentences like “Caffeine is a stimulant drug.” You can get the nodes and links inside the system: chocolate contains theobromine; caffeine is closely related to theobromine; caffeine is a stimulant. You can put all that together inside the system’s reasoning engine and it can figure it out. That’s fine, but if you look at the real Wikipedia, rather than Simple English Wikipedia, then you’re back to sentences that are more difficult.
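That chocolate/theobromine/caffeine chain can be sketched as a tiny inference over weighted triples. The combination rule (multiplying strengths) and the numbers are illustrative assumptions, not PLN’s actual formulas:

```python
# Sketch: chaining probabilistic links to conclude that chocolate
# is stimulating, via the theobromine/caffeine similarity.

facts = {
    ("chocolate", "contains", "theobromine"): 0.95,
    ("theobromine", "similar_to", "caffeine"): 0.8,
    ("caffeine", "is_a", "stimulant"): 0.9,
}

def infer_stimulant(food):
    """Chain: food contains X, X is similar to Y, Y is a stimulant."""
    best = 0.0
    for (a, r1, x), p1 in facts.items():
        if a != food or r1 != "contains":
            continue
        for (x2, r2, y), p2 in facts.items():
            if x2 != x or r2 != "similar_to":
                continue
            p3 = facts.get((y, "is_a", "stimulant"), 0.0)
            best = max(best, p1 * p2 * p3)
    return best

print(infer_stimulant("chocolate"))
```

The hard part, of course, is not this three-step chaining; it is reliably extracting the weighted triples from ordinary, non-simplified text in the first place.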

The idea that you could get that through a sufficiently large corpus is obviously true in principle. But how large is the corpus that Google would need? Is it ten quintillion or ten nonillion documents? The fact that it can’t even answer how long a pig will live in captivity tells you something. There are not enough sentences directly of the form “a pig in captivity lives up to this many years.” That information is implicit among a lot of sentences, and to get that implicit knowledge out requires you to solve all these problems. It comes down to how big the corpus has to be. My guess is that as long as texts are produced by human beings, we’re not going to produce enough, because we only reproduce at a certain rate.

We’re out of time and I’m out of presentation, but if you have more questions, I’d be happy to answer them.

Originally published at