Roadmap to Artificial General Intelligence

Are you on team “top-down engineers” or team “bottom-up biologists” in the race towards Artificial General Intelligence (AGI)?

Machines have exceeded human intellectual capabilities in many regards, but so far only on narrow tasks. However, the sophistication and complexity of these narrow tasks has risen dramatically. AI systems that win at Go or recognize fine details in images have advanced rapidly in recent years (even more so now with AlphaGo Zero). The speed of this development is impressive, almost shocking. With this progress, we could soon arrive at General Machine Intelligence: artificial intelligence that goes beyond a single narrow task and allows human-like evaluation and handling of complex tasks requiring different kinds of intelligence.

How can a machine learn general intelligence?

A machine’s intelligence is the result of the data it learned from, in the way an algorithm requires it to. Instead of a “rule-based” intelligence we have a machine “making its own rules”, or, more meta, a “rule-based” acquisition of intelligence. It is obvious that human intelligence cannot simply be “taught” to a machine, because it does not follow a bounded set of clearly defined rules. A natural question to ask is whether human-like intelligence can be derived from large amounts of data, or whether we need to understand biological mechanisms better in order to reimplement nature’s engineering in a silicon environment. Or could it be a combination of both?
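To make the contrast concrete, here is a minimal, purely illustrative Python sketch; the spam-filtering task, the feature, and all names are invented for this example. A human hard-codes one rule, while the “learning” version derives its own threshold from labelled data.

# Rule-based: a human writes the decision rule by hand.
def is_spam_rule_based(num_exclamation_marks):
    return num_exclamation_marks > 3  # threshold chosen by a human

# Data-driven: the machine derives its own threshold from labelled examples.
def learn_threshold(examples):
    # examples: list of (num_exclamation_marks, is_spam) pairs
    best_threshold, best_accuracy = 0, 0.0
    for candidate in range(11):
        correct = sum((count > candidate) == label for count, label in examples)
        accuracy = correct / len(examples)
        if accuracy > best_accuracy:
            best_threshold, best_accuracy = candidate, accuracy
    return best_threshold

training_data = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
print(learn_threshold(training_data))  # the "rule" the machine made for itself

The learned rule here is trivial, but the same principle, parameters fit to data rather than written by hand, is what scales up to systems like AlphaGo.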

What is the role of imagination in AGI?

One of the major advantages of the human brain is its ability to learn new things in fewer iterations than a machine needs. One could argue that this follows in large part from the ability to imagine: a human can relate a new encounter to similar concepts it has already “understood”. With this framing, however, it almost sounds as if imagination could be solved by throwing more data at it. But perhaps imagination is much more complex, and implementing something like it in a machine would require a breakthrough in our biological understanding of the brain.

Can Machine Imagination help with AI safety?

If intelligent machines used imagination before acting, as humans do, would this be of interest for AI safety? If one could monitor the imagination of an intelligent operator (setting the ethics aside at this point), whether human or not, one might be able to stop or influence that operator’s actions as soon as they trend in a critical direction. But a machine that reaches superhuman intelligence might in turn easily learn to feign imaginations that do not reflect its real intentions. I guess this only illustrates how important AI safety is and will be.

Dive deep with our podcast on this topic

In this podcast episode we talk to Stephen Larson, Co-Founder of OpenWorm, and Tim Shi, a Stanford PhD student engaged in AI research, about AGI. We touch on several topics:

  • Which approach is likelier to yield results: Biological Emulation or Pure Software and Data Engineering?
  • Engineering, startup, & corporate research vs. academic research
  • Potential impacts of AGI on society in terms of centralization vs. decentralization
  • AI-safety regarding AGI

Tim Shi is a Computer Science PhD student at Stanford University and founder of moxel.ai, a machine learning social aggregation platform. He is enthusiastic and optimistic about AI and the singularity.

Links: TimShi.xyz and moxel.ai

Dr. Stephen Larson is CEO of MetaCell, a bioinformatics software services company. He is a graduate of MIT in Computer Science and received a Ph.D. in Neuroscience from UC San Diego. He is also Co-Founder and Project Director of OpenWorm, whose mission is to simulate the body and neural network of the nematode C. elegans in a computer.

Links: OpenWorm.org and MetaCell.us

Listen to the podcast here at www.letsmakethefuture.com