DeepMind’s misleading campaign against innateness

Gary Marcus
3 min read · Feb 25, 2018


DeepMind’s new paper on learning a “machine theory of mind” is fascinating, but it again makes a philosophical error that has become characteristic of DeepMind — exactly the same error I discussed four weeks ago, in an arXiv paper evaluating AlphaGo [https://arxiv.org/abs/1801.05667]. DeepMind’s AlphaGo paper claimed to build a Go expert “without human knowledge” but in fact (as reviewed in detail in my AlphaGo arXiv critique) built in very significant parts of its solution, such as a sophisticated algorithm known as Monte Carlo tree search. As I noted, their work was presented with a strong but misleading tilt towards nurture in the classic “nature-nurture debate.”

The current theory of mind paper claims to present a system that “autonomously learns how to model other agents in its world.” In fact, in close parallel with the Go work, DeepMind has quietly built in a wealth of prior assumptions, including the fact that other agents exist, the fact that different agents can have different beliefs, and the fact that those agents’ beliefs can be false (represented by a particular output node in the system) — which is to say they built in the very core of theory of mind. For good measure, they built in three distinct representational planes for distinguishing walls, objects, and agents. All of this is buried in the text and appendices; none of it is forthrightly acknowledged in the abstract or general discussion.
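To make concrete what “building in” looks like, here is a minimal sketch, in PyTorch, of the kind of structure described above: fixed input planes for walls, objects, and agents, plus a dedicated output head for an agent’s (possibly false) belief about where an object is. Every class name, constant, and dimension here is hypothetical and chosen purely for illustration; this is not DeepMind’s code.

```python
import torch.nn as nn

# Hypothetical constants -- the real paper's gridworld sizes and action space
# may differ; these are placeholders for illustration.
N_PLANES = 3           # fixed, programmer-chosen planes: walls, objects, agents
GRID = 11              # assumed gridworld side length
N_ACTIONS = 5          # e.g., four moves plus "stay"
N_OBJECTS = 4          # number of candidate goal objects

class ObservationEncoder(nn.Module):
    """The input is *defined* as separate walls/objects/agents planes;
    that three-way distinction is supplied by the programmer, not learned."""
    def __init__(self, hidden=32):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(N_PLANES, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )

    def forward(self, planes):               # planes: (batch, 3, GRID, GRID)
        return self.conv(planes).flatten(1)  # -> (batch, hidden * GRID * GRID)

class PredictionHeads(nn.Module):
    """Separate heads for next action, goal, and the observed agent's *belief*
    about an object's location -- the very possibility that a belief can
    diverge from reality is wired into the output layer by hand."""
    def __init__(self, in_dim):
        super().__init__()
        self.action = nn.Linear(in_dim, N_ACTIONS)
        self.goal = nn.Linear(in_dim, N_OBJECTS)
        self.belief = nn.Linear(in_dim, GRID * GRID)  # believed-location map

    def forward(self, h):
        return self.action(h), self.goal(h), self.belief(h)
```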

Importantly, all the bits of innate knowledge that DeepMind built in this time are different bits of innate structure from those used in the Go systems. AlphaGo relied on Monte Carlo tree search; ToMNet (the theory of mind net) relies on a modular structure that separates knowledge of agents’ properties in general from an analysis of an individual agent’s characteristics, and so forth.
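That modular separation can likewise be sketched as two hand-drawn modules, one summarizing an agent’s past behavior and one summarizing only the current episode, whose outputs are concatenated and fed to the prediction heads from the sketch above. Again, the names, sizes, and wiring are placeholders I have invented for illustration, not the paper’s actual architecture.

```python
import torch
import torch.nn as nn

class CharacterNet(nn.Module):
    """Summarizes an agent's *past* trajectories into a 'character' embedding.
    The module boundary itself is a design decision made by the programmers."""
    def __init__(self, traj_dim, emb=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb))

    def forward(self, past_traj):            # (batch, traj_dim), pre-flattened
        return self.net(past_traj)

class MentalStateNet(nn.Module):
    """Summarizes only the *current* episode, so the architecture pre-separates
    'what kind of agent this is' from 'what it is up to right now'."""
    def __init__(self, traj_dim, emb=8):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(traj_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb))

    def forward(self, current_traj):
        return self.net(current_traj)

class ToMNetSketch(nn.Module):
    """Glues the fixed modules together and feeds the prediction heads
    from the previous sketch (PredictionHeads defined above)."""
    def __init__(self, traj_dim, obs_dim):
        super().__init__()
        self.character = CharacterNet(traj_dim)
        self.mental = MentalStateNet(traj_dim)
        self.heads = PredictionHeads(in_dim=8 + 8 + obs_dim)

    def forward(self, past_traj, current_traj, obs_features):
        h = torch.cat([self.character(past_traj),
                       self.mental(current_traj),
                       obs_features], dim=-1)
        return self.heads(h)
```

None of this is objectionable engineering; the point is simply that each of these divisions is chosen by the designers in advance, not discovered by the system.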

This profusion of unacknowledged nativism is exactly what I anticipated in my January 17 article (and also in my earlier January 2 critique of deep learning):

And the further one goes from straightforward games, the more one may need to enrich the set of primitives. Ultimately, it seems likely that many different types of tasks will have their own innate requirements: Monte Carlo tree search for board games, syntactic tree manipulation operations for language understanding, geometric primitives for 3-D scene understanding, theory of mind for problems demanding social coalitions, and so forth.

Taken together, the full set of primitives may look less like a tabula rasa and more like the spatiotemporal manifold that Immanuel Kant (1781) envisioned, or like the sort of things that strong nativists like myself, Noam Chomsky, Elizabeth Spelke, Steve Pinker and the late Jerry Fodor have envisioned.

It is only once the right bits of innate structure are incorporated by programmers that these sorts of systems can succeed. Calling systems “learning systems” without acknowledging their innate contributions continues to mislead.

It is my hope that the AI field will begin to consider the need for innate machinery in a more principled way.
