OpenCog vs OpenAI — 2 Different Paths to Human-Level AI

Yves Bergquist
Mar 2, 2017


I often get asked how the OpenCog project, which my friend and co-founder Ben Goertzel started together with my other friend and co-founder Cassio Pennachin (and a few other really smart and dedicated AI engineers), compares to Elon Musk and Sam Altman’s OpenAI project.

Ben wrote a great Quora post on this issue recently, so I wanted to share it for those of you wondering about the different paths currently being taken toward human-level Artificial General Intelligence. For those of you eager to dive deeper into this, I really recommend reading Ben/Cassio/Nil’s “Engineering General Intelligence” (here: http://www.springer.com/us/book/9789462390263).

HOW IS OPENCOG DIFFERENT FROM OPENAI?

(by Ben Goertzel)

OpenCog is a few things:

  1. A software framework designed for the interoperation of multiple cognitive processes on a common weighted, labeled hypergraph knowledge store (the AtomSpace; a minimal toy sketch follows this list)
  2. A project aimed at building a human-level (and more) AGI based on the design given in the book “Engineering General Intelligence vol. 2” by Goertzel, Pennachin and Geisweiller …
  3. A community of people, and a nonprofit foundation, oriented toward the above
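To make item 1 a bit more concrete, here is a minimal, purely illustrative Python sketch of what a weighted, labeled hypergraph knowledge store looks like: atoms are typed nodes and links (hyperedges over other atoms), each annotated with a truth value. The names (Atom, TruthValue, ConceptNode, InheritanceLink) echo OpenCog terminology, but this is a toy model of my own, not the real OpenCog API.

```python
# Toy sketch of a weighted, labeled hypergraph store in the spirit of the
# AtomSpace. Illustrative only; not OpenCog code.
from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass(frozen=True)
class TruthValue:
    strength: float = 1.0     # how strongly the atom is believed
    confidence: float = 0.0   # how much evidence backs that belief


@dataclass(frozen=True)
class Atom:
    type: str                          # label, e.g. "ConceptNode" or "InheritanceLink"
    name: str = ""                     # only node-like atoms carry a name
    outgoing: Tuple["Atom", ...] = ()  # links are hyperedges over other atoms


class ToyAtomSpace:
    """A shared store that multiple cognitive processes can read and write."""

    def __init__(self) -> None:
        self._atoms: Dict[Atom, TruthValue] = {}

    def add(self, atom: Atom, tv: TruthValue = TruthValue()) -> Atom:
        self._atoms[atom] = tv          # the truth value is the atom's "weight"
        return atom

    def get_tv(self, atom: Atom) -> TruthValue:
        return self._atoms[atom]

    def atoms(self) -> List[Atom]:
        return list(self._atoms)


# Usage: assert "cat inherits from animal" with strength 0.9 and confidence 0.8.
space = ToyAtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
space.add(Atom("InheritanceLink", outgoing=(cat, animal)), TruthValue(0.9, 0.8))
```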

OpenAI is currently much better funded than OpenCog, and is also focused mainly on deep neural networks. OpenCog has a deep neural net aspect as well (there is experimental work going on in Hong Kong and Ethiopia right now, integrating deep NNs for perception into the AtomSpace), but it is not the central aspect of the architecture or project.

The first step to understanding the numerous differences is to note that OpenCog is founded on a comprehensive model of human-like general intelligence, and a comprehensive overall plan for getting from here to human-level (and more) AGI.

On the other hand, OpenAI appears (from their public statements and behaviors) to be based on the general plan of starting from current deep NN technology and applying and extending it in various interesting and valuable ways, thereby moving incrementally toward AGI without much of an overall plan or model of the whole AGI problem.

In OpenCog, we do plenty of this kind of incremental experimentation and tinkering, but we also have a clearly articulated high-level cognitive model and game-plan.

If you are convinced that formal neural networks are the path to AGI, then you will like OpenAI more than OpenCog. If you are open to an integrative approach in which multiple different sorts of AI algorithms operate together on a common representational substrate (including deep NNs but also probabilistic logic theorem proving, evolutionary learning, concept blending, etc.) then OpenCog may appeal to you more.
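For a rough sense of what “operating together on a common representational substrate” might look like, here is a hedged continuation of the toy store sketched after the list above (again, my own illustration, not OpenCog code), reusing the `space` and `cat` atoms defined there: a crude deduction rule in the spirit of probabilistic logic and a stand-in for a neural perception module both read from and write to the same store.

```python
# Hypothetical continuation of the ToyAtomSpace sketch above: two very
# different "cognitive processes" share one substrate. Neither function is
# taken from OpenCog; both are stand-ins to illustrate the integrative idea.

def deduction_step(space: ToyAtomSpace) -> None:
    """If A->B and B->C are stored, add A->C (a crude, PLN-flavored rule)."""
    links = [a for a in space.atoms() if a.type == "InheritanceLink"]
    for ab in links:
        for bc in links:
            if ab is not bc and ab.outgoing[1] == bc.outgoing[0]:
                tv_ab, tv_bc = space.get_tv(ab), space.get_tv(bc)
                space.add(
                    Atom("InheritanceLink", outgoing=(ab.outgoing[0], bc.outgoing[1])),
                    TruthValue(tv_ab.strength * tv_bc.strength,
                               min(tv_ab.confidence, tv_bc.confidence)),
                )


def perception_stub(space: ToyAtomSpace, label: str, score: float) -> Atom:
    """Stand-in for a deep-NN perception module posting what it 'sees'."""
    return space.add(Atom("ConceptNode", label), TruthValue(score, 0.5))


# Usage: a "perceived" tiger is linked into the same store the reasoner uses,
# so deduction_step can derive tiger->animal from tiger->cat and cat->animal.
tiger = perception_stub(space, "tiger", 0.95)
space.add(Atom("InheritanceLink", outgoing=(tiger, cat)), TruthValue(0.8, 0.6))
deduction_step(space)
```

The point is only that both processes share one representation rather than each maintaining its own, which is the integrative idea described above.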

Another practical difference at present is that many members of the OpenCog community are currently working on applying OpenCog to humanoid robotics, in collaboration with Hanson Robotics. OpenAI, on the other hand, is focusing on other sorts of problems. OpenCog has also been used in various other domains and is not tied to robotics, but a decent percentage of the current effort is oriented that way. So if you want to play with humanoid robot control, again, OpenCog may be more apropos for you…

Finally, a difference on the community side is that OpenCog seems a fair bit more open in terms of its strategy, decision-making and so forth. TensorFlow is an example of an open-source project that could be described as “open source, closed strategy.” OpenAI does not go as far in that direction as TensorFlow, but certainly much further than OpenCog, which pretty much “lets it all hang out” in the spirit of the good old Libre software community.

--

Yves Bergquist

Co-Founder & CEO of AI Startup Corto. Director of AI & Blockchain @ USC’s Entertainment Technology Center. Member/Researcher, DSL@Columbia University