Deep Nets Don’t Learn via Memorization
Felix Lau

Nice summary, thanks. I wonder, though: what makes a dataset “real”? And if we cannot define “real” (I certainly cannot!), then what is the point of the distinction? More constructively, we probably need to work hard to gain a better understanding of what is special about vision, speech, language, or whatever domain our data comes from. It seems to me that while we have certain ways of describing these data sources, they are still insufficient to explain the experimental results we are getting.