Gary Marcus’s Deep Learning Critique Triggers Backlash

Synced
Jan 7, 2018 · 4 min read

It took a mere 72 hours for deep learning researchers to ignite the first AI Twitter debate of 2018.

On January 2, Gary Marcus, NYU Professor and founder of the Uber-owned machine learning startup Geometric Intelligence, published the paper Deep Learning: A Critical Appraisal on arXiv. The paper catalogues the problems he believes are keeping current research from achieving artificial general intelligence.

Marcus’s central argument was that present deep learning systems fail to generalize beyond the specific datasets they have been trained on. He listed ten challenges facing deep learning research, including data hunger, lack of transparency, inability to extrapolate, and difficulty of engineering.

Marcus said his greatest fear is that AI will get trapped in a “local minimum, focusing too much on the detailed exploration of a particular class of accessible but limited models,” and forget its mission to march towards artificial general intelligence. He then raised possibilities for a future beyond deep learning: focusing more on unsupervised learning; revisiting symbol manipulation (aka GOFAI, Good Old-Fashioned AI); drawing insights from cognitive and developmental psychology; and paying more attention to acquiring common sense knowledge, scientific reasoning, game playing, etc.

A day later, former AAAI Co-chair and NIPS Chair Thomas G. Dietterich countered Marcus’s article with no fewer than 10 tweets, calling it a “disappointing article… DL learns representations as well as mappings. Deep machine translation reads the source sentence, represents it in memory, then generates the output sentence. It works better than anything GOFAI ever produced.”

Dietterich added that “DL is essentially a new style of programming — differentiable programming — and the field is trying to work out the reusable constructs in this style. We have some: convolution, pooling, LSTM, GAN, VAE, memory units, routing units, etc.”
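Dietterich’s “reusable constructs” point can be made concrete with a short sketch. The example below is a minimal illustration, assuming PyTorch (a framework the article does not name) and hypothetical class and variable names, of how convolution, pooling, and an LSTM compose like ordinary library functions into one program trained end to end by backpropagation; it is not code from either side of the debate.

```python
# A minimal sketch of "differentiable programming": reusable differentiable
# building blocks (convolution, pooling, LSTM) composed like library functions
# and trained jointly with gradient descent. PyTorch is assumed; all names here
# are illustrative.
import torch
import torch.nn as nn

class TinyDifferentiableProgram(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=32, hidden_dim=64, num_classes=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)                         # lookup table
        self.conv = nn.Conv1d(embed_dim, hidden_dim, kernel_size=3, padding=1)   # convolution
        self.pool = nn.AdaptiveMaxPool1d(8)                                      # pooling
        self.lstm = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)            # memory unit
        self.head = nn.Linear(hidden_dim, num_classes)                           # output mapping

    def forward(self, token_ids):
        x = self.embed(token_ids)          # (batch, seq, embed_dim)
        x = x.transpose(1, 2)              # (batch, embed_dim, seq) for Conv1d
        x = torch.relu(self.conv(x))
        x = self.pool(x).transpose(1, 2)   # (batch, 8, hidden_dim)
        _, (h_n, _) = self.lstm(x)         # final hidden state summarizes the sequence
        return self.head(h_n[-1])          # class logits

# Every block above is differentiable, so a single backward pass trains them all.
model = TinyDifferentiableProgram()
tokens = torch.randint(0, 1000, (4, 20))                                  # toy batch of token ids
loss = nn.CrossEntropyLoss()(model(tokens), torch.tensor([0, 1, 2, 3]))   # toy labels
loss.backward()                            # gradients flow through conv, pool, and LSTM alike
```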

Long-time deep learning advocate and Facebook Director of AI Research Yann LeCun backed Dietterich’s counter-arguments: “Tom is exactly right.” In a response to MIT Tech Review Editor Jason Pontin and Gary Marcus, LeCun testily suggested that the latter might have mixed up “deep learning” and “unsupervised learning”, and said Marcus’s valuable recommendations totalled “exactly zero.”


Marcus and LeCun have a history: they squared off in a New York University debate last October. Marcus advocates integrating deep learning with insights from the human cognitive sciences, whereas LeCun is not thrilled by that possibility. You can watch the NYU debate here: https://www.youtube.com/watch?v=aCCotxqxFsk

Some Reddit users argued that Marcus had ignored technical details and recent advancements such as GANs and zero-shot and few-shot deep learning methods. Redditor Gwern commented, “if anything, I came out more convinced DL is the future, if that is the best the critics can do…”

So is deep learning hitting a wall, as Marcus’s paper argues? Other researchers have raised similar questions: last year, Turing Award winner and University of California, Los Angeles Professor Judea Pearl argued in his paper Theoretical Impediments to Machine Learning With Seven Sparks from the Causal Revolution that human-level AI cannot emerge from model-blind learning machines that ignore causal relationships. He brought the discussion to NIPS in December.

At an AI conference in Montreal last October, the deep learning trio of Geoff Hinton, Yann LeCun, and Yoshua Bengio agreed that deep learning research is no longer on the fast track. So have we hit a wall? Opinions are divided, but as we roll into 2018 one thing appears certain: this won’t be the last AI Twitter debate of the year.

Journalist: Meghan Han | Editor: Michael Sarazen

Written by Synced

AI Technology & Industry Review — syncedreview.com | Newsletter: http://bit.ly/2IYL6Y2 | Share My Research http://bit.ly/2TrUPMI | Twitter: @Synced_Global

SyncedReview

We produce professional, authoritative, and thought-provoking content relating to artificial intelligence, machine intelligence, emerging technologies and industrial insights.
