
Day after day we see startling and quickly rising numbers of confirmed COVID-19 infections. They’re scary. But the true scope of the pandemic is almost certainly scarier.

Confirmed cases represent only a fraction of the real spread. In most communities, only the sickest patients are being tested, so most people with mild symptoms, and those who are asymptomatic, go untested and unreported. The real number of people who have been infected by the virus almost certainly dwarfs the cases we know about.
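
To see why, consider a rough back-of-the-envelope calculation, sketched below in Python. Every number in it is a hypothetical placeholder chosen for illustration, not an estimate from this article; the idea is simply that deaths are much harder to miss than mild infections, so you can work backwards from deaths to a ballpark figure for true infections.

```python
# Back-of-envelope estimate of unreported infections.
# Every input here is a hypothetical placeholder, for illustration only.

deaths_today = 100                  # cumulative confirmed deaths (hypothetical)
infection_fatality_rate = 0.01      # assumed: ~1% of infections are fatal
days_from_infection_to_death = 21   # assumed lag between infection and death
doubling_time_days = 5              # assumed doubling time during that lag

# Today's deaths reflect infections that happened ~3 weeks ago.
infections_then = deaths_today / infection_fatality_rate

# The epidemic kept growing during those 3 weeks.
doublings = days_from_infection_to_death / doubling_time_days
estimated_infections_now = infections_then * 2 ** doublings

confirmed_cases = 5_000             # hypothetical official count
print(f"estimated true infections: {estimated_infections_now:,.0f}")
print(f"confirmed cases:           {confirmed_cases:,}")
print(f"implied undercount:       ~{estimated_infections_now / confirmed_cases:.0f}x")
```

With these made-up inputs, the implied undercount is well over an order of magnitude; the exact factor matters less than the shape of the reasoning.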

And the public winds up with a distorted picture of how prevalent the virus is, sometimes with tragic consequences.

Take the Skagit Valley Chorale, which met on March 6th in Washington State. The virus was already killing people an hour away in Seattle, but because there were no reported cases in Skagit County at the time, and no estimates of unreported cases were publicly available, choir members were convinced it was safe to go ahead with a scheduled rehearsal. …


Throughout much of the world, politicians have been slow to react to COVID-19, often starting with half measures and typically arriving at the drastic measures we really need too late, after the disease has already spread to hundreds or thousands of people. Far too many leaders have taken a gradualist approach, either failing to appreciate the terrifying power of exponential spread until it was too late or proving unwilling to act for fear of the economic and political repercussions. Almost no nation has enough protective gear, enough beds, or enough tests.
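
To make the power of exponential spread concrete, here is a minimal sketch; the seed count and doubling time are assumptions chosen for illustration, not epidemiological estimates.

```python
# Why half measures fail: unchecked doubling turns a handful of cases
# into thousands within a month. Seed count and doubling time are
# illustrative assumptions, not epidemiological estimates.

cases = 10                # hypothetical seed infections
doubling_time_days = 3    # assumed doubling time

for day in range(0, 31, 3):
    print(f"day {day:2d}: ~{cases:,} infections")
    cases *= 2
```

Under these assumptions, ten infections become more than ten thousand in a month, which is why each week of delay is so costly.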

In a perfect world, we might be able to follow South Korea and Taiwan’s approach of extensive monitoring, with some degree of freedom to move around, but we are not in a perfect world. That approach requires widespread testing, and despite plenty of advance warning from China that ramping up the production and processing of tests was warranted, almost no other country is yet in a position to safely follow South Korea and Taiwan’s lead. …


A Dialogue between Yoshua Bengio and Gary Marcus

January 1, 2020

GM: Thanks for your last note, Yoshua, giving your definition of deep learning. I think you have your finger on something, and I definitely learned something from our conversation.

Whereas I am looking for a term that describes and analyzes the HOW of current research, you are really trying to characterize the GOAL of a research program. Those both seem incredibly worthwhile.

More broadly, I’m all for you defining your own terms. And you are right that much of the community is working on the research program you describe. You are also certainly right that the field has gathered many amazing priors already, and just as correct in observing that more needs to be done. …


Dear Yoshua,

Thank you for your speedy response to my Medium post on defining deep learning. Although your reply is strongly worded, you inadvertently confirmed my article’s central point: a distinction between deep-learning-as-open-ended-methodology and what I called core deep learning is desperately needed.

To refresh your memory, I defined core deep learning as follows:

let’s call the central set of techniques that characterized early deep learning (and in fact the great majority of what has been published so far) — multilayer perceptrons, convolutional nets, and so forth — core deep learning.
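
For readers who want a concrete sense of the first item on that list, here is a minimal multilayer perceptron forward pass in plain numpy; it is a generic textbook sketch, not code from any particular system.

```python
import numpy as np

# A minimal multilayer perceptron: input -> hidden (ReLU) -> output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input dim 4, hidden dim 8
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)   # 3 output classes

def mlp_forward(x):
    hidden = np.maximum(0, x @ W1 + b1)          # ReLU nonlinearity
    logits = hidden @ W2 + b2
    exp = np.exp(logits - logits.max())          # softmax over classes
    return exp / exp.sum()

print(mlp_forward(rng.normal(size=4)))           # probabilities over 3 classes
```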

Most of your reply talks about deep-learning-as-open-ended-methodology, essentially defining deep learning in terms of an ongoing research program, quite apart from the techniques that made that research program well-known. I am glad that you have clarified that this is your preferred sense of the word going forward; that is your undisputed right, and clarity about how you intend the term can only help. …


Important update, December 29, 2019: the piece below apparently led some people to think I was challenging Yoshua’s integrity.

That was NOT my intent; I sincerely think his work is terrific & I admire him as a person of values and integrity.

We differ on some important conceptual questions; I try below to clarify. I am sorry that I was not clearer.

On December 23, 2019, Yoshua Bengio and I debated the past and future of AI. Several thousand people tuned in, and tens of thousands watched afterward; ZDNet described it as a “historic event”. Some people loved it, some hated it; I wished we had had more time. …


The current state of AI and Deep Learning: A reply to Yoshua Bengio

Dear Yoshua,

Thanks for your note on Facebook, which I reprint below, followed by some thoughts of my own. I appreciate your taking the time to consider these issues.


I concur that you and I agree more than we disagree, and I share your implicit hope that the field might benefit from an articulation of both our agreements and our disagreements.

Agreements

  • Deep learning, as it has been practiced, is a valuable tool, but not enough on its own, in its current form, to get us to general intelligence. …


Some reflections on an accidental Twitterstorm, the future of AI and deep learning, and what happens when you confuse a school bus with a snow plow.

On November 21, I read an interview with Yoshua Bengio in Technology Review that to a surprising degree downplayed recent successes in deep learning, emphasizing instead that some other important problems in AI might require important extensions to what deep learning is currently able to do. In particular, Bengio told Technology Review that,

I think we need to consider the hard challenges of AI and not be satisfied with short-term, incremental advances. I’m not saying I want to forget deep learning. On the contrary, I want to build on it. …



The Past

Many researchers have long worried about whether neural networks could generalize effectively enough to capture the richness of language. It has been a major theme of my work since the 1990s; before me, Fodor and Pylyshyn, and Pinker and Prince, made closely related points in Cognition in 1988. Brenden Lake and his collaborators made similar points earlier this year.

To take but one example, here’s something I wrote on the topic in January:

Deep learning systems work less well when there are limited amounts of training data available, or when the test set differs importantly from the training set, or when the space of examples is broad and filled with novelty. And some problems cannot, given real-world limitations, be thought of as classification problems at all. Open-ended natural language understanding, for example, should not be thought of as a classifier mapping between a large finite set of sentences and a large, finite set of sentences, but rather a mapping between a potentially infinite range of input sentences and an equally vast array of meanings, many never previously encountered. …
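
The point about test sets that differ from training sets is easy to demonstrate. In the toy sketch below (my construction for illustration, not an experiment from the quoted piece), a small network fits f(x) = 2x well inside its training range and then fails just outside it.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Fit f(x) = 2x, but only on x in [0, 1]; then test outside that range.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(500, 1))
y_train = 2 * X_train.ravel()

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   max_iter=5000, random_state=0)
net.fit(X_train, y_train)

# Inside the training range the fit is good; outside it, the tanh units
# saturate and predictions flatten out instead of tracking 2x.
for x in [0.5, 1.0, 2.0, 5.0]:
    print(f"x={x:4.1f}  target={2 * x:5.1f}  prediction={net.predict([[x]])[0]:6.2f}")
```

The function is trivially simple, yet the network has no way to know, from the data it saw, that the linear rule should extend beyond the region it was trained on.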



DeepMind’s new paper on learning a “machine theory of mind” is fascinating, but it again makes a philosophical error that has become characteristic of DeepMind — exactly the same error I discussed four weeks ago, in an arXiv paper evaluating AlphaGo [https://arxiv.org/abs/1801.05667]. DeepMind’s AlphaGo paper claimed to build a Go expert “without human knowledge” but in fact (as reviewed in detail in my AlphaGo arXiv critique) built in very significant parts of its solution, such as a sophisticated algorithm known as Monte Carlo tree search. …
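
For readers unfamiliar with the algorithm in question, here is a bare-bones sketch of Monte Carlo tree search applied to a toy game of Nim; it is a generic illustration of the technique, far simpler than AlphaGo’s version, which couples the search to learned policy and value networks.

```python
import math, random

# Bare-bones UCT Monte Carlo tree search for a toy game of Nim:
# players alternately remove 1-3 stones; taking the last stone wins.
# A generic illustration, far simpler than AlphaGo's learned variant.

class Node:
    def __init__(self, stones, parent=None):
        self.stones, self.parent = stones, parent
        self.children = {}              # move -> child Node
        self.visits = self.wins = 0     # wins: for the player who just moved

    def untried_moves(self):
        return [m for m in (1, 2, 3)
                if m <= self.stones and m not in self.children]

def rollout(stones):
    """Random playout; return 1 if the player to move from here wins."""
    to_move = True
    while stones > 0:
        stones -= random.randint(1, min(3, stones))
        if stones == 0:
            return 1 if to_move else 0
        to_move = not to_move
    return 0  # no stones left: the player to move has already lost

def mcts(root_stones, iterations=2000):
    root = Node(root_stones)
    for _ in range(iterations):
        node = root
        # 1. Selection: descend via the UCT formula while fully expanded.
        while not node.untried_moves() and node.children:
            node = max(node.children.values(),
                       key=lambda c: c.wins / c.visits +
                       math.sqrt(2 * math.log(node.visits) / c.visits))
        # 2. Expansion: add one unexplored child, if any remain.
        moves = node.untried_moves()
        if moves:
            m = random.choice(moves)
            node.children[m] = Node(node.stones - m, parent=node)
            node = node.children[m]
        # 3. Simulation: random playout, scored for the player who just moved.
        result = 1 - rollout(node.stones)
        # 4. Backpropagation: alternate perspectives up the tree.
        while node:
            node.visits += 1
            node.wins += result
            result = 1 - result
            node = node.parent
    return max(root.children, key=lambda m: root.children[m].visits)

print("Best opening move from 10 stones:", mcts(10))  # optimal play takes 2
```

Even this stripped-down version embodies substantial human knowledge about how to search a game tree, which is exactly the point at issue.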



“All truth passes through three stages: First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as self-evident.”

— Often attributed to Schopenhauer

In a recent appraisal of deep learning (Marcus, 2018) I outlined ten challenges for deep learning, and suggested that deep learning by itself, although useful, was unlikely to lead on its own to artificial general intelligence. I suggested instead that deep learning be viewed “not as a universal solvent, but simply as one tool among many.”

In place of pure deep learning, I called for hybrid models that would incorporate not just supervised forms of deep learning but also other techniques, such as symbol manipulation and unsupervised learning (itself possibly reconceptualized). …
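
As a toy illustration of what a hybrid might look like (my own minimal sketch, not an architecture anyone has proposed): a learned perception module emits discrete symbols, and an explicit, inspectable rule base reasons over them.

```python
# A toy neurosymbolic hybrid: a (stubbed) learned perception module
# emits discrete symbols; a symbolic rule base reasons over them.
# A minimal illustration of the idea, not a proposed architecture.

def neural_perception(image):
    """Stand-in for a trained classifier mapping pixels to a symbol."""
    # In a real system this would be a deep net; here, a stub lookup.
    return {"img_001": "school_bus", "img_002": "snow_plow"}[image]

# Symbolic knowledge: explicit, inspectable rules over symbols.
RULES = {
    "school_bus": ["vehicle", "carries_children", "stops_traffic"],
    "snow_plow":  ["vehicle", "clears_roads"],
}

def infer(image):
    symbol = neural_perception(image)   # perception -> symbol
    facts = RULES.get(symbol, [])       # symbol -> symbolic inferences
    return symbol, facts

print(infer("img_001"))  # ('school_bus', ['vehicle', 'carries_children', ...])
```

The division of labor is the point: the statistical component handles perception, while the symbolic component makes the system’s knowledge explicit and debuggable.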

About

Gary Marcus

CEO & Founder of Robust.AI; co-author (with Ernest Davis) of Rebooting.AI. Also a proud dad, founder of Geometric Intelligence (acquired by Uber), and Professor Emeritus at NYU.
