Interview with Geoffrey Hinton

Zhuoran
4 min read · May 27, 2018

--

Geoffrey Hinton, the “Godfather of Deep Learning,” developed many of the ideas behind deep learning.

Notes from Coursera: Neural Networks and Deep Learning, Week 1.

How did he get interested in the brain?

When he was in high school, a classmate told him that the brain works like a hologram: you can chop off half of a hologram and still get the whole picture. That suggested to him that memories might be distributed over the whole brain.

How did he try to understand the brain?

He started off studying physiology and physics, then tried philosophy.

But philosophy seemed to him to lack ways of telling when its theories are false, so he switched to psychology.

In psychology, the theories seemed too simple to explain the brain, so he decided to try AI.

What work is he most excited about?

Boltzmann machines.
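The interview doesn’t go into details, but for context, a Boltzmann machine is a network of stochastic binary units with symmetric connections, and in the standard formulation the probability of a joint state is set by an energy function (my summary, not part of the course notes):

```latex
E(\mathbf{s}) = -\sum_{i<j} w_{ij}\, s_i s_j \;-\; \sum_i b_i s_i,
\qquad
p(\mathbf{s}) = \frac{e^{-E(\mathbf{s})}}{\sum_{\mathbf{s}'} e^{-E(\mathbf{s}')}}
```

Low-energy configurations are exponentially more probable, and learning adjusts the weights so that the model assigns high probability to the training data.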

Trust your intuitions

For creative researchers: read enough so that you start developing intuitions, then trust your intuitions and go for it; don't be too worried if everybody else says it's nonsense. When you have what you think is a good idea and other people think it's complete rubbish, that's the sign of a really good idea.

Never stop programming

The other advice I have is: never stop programming. Because if you give a student something to do and they're botching it, they'll come back and say it didn't work, and the reason it didn't work will be some little decision they made that they didn't realize was crucial. But if you give it to a good student, you can give them anything and they'll come back and say it worked.

I remember doing this once, and I said, but wait a minute. Since we last talked, I realized it couldn’t possibly work for the following reason. And he said, yeah, I realized that right away, so I assumed you didn’t mean that.

Find an advisor who has similar beliefs

One good piece of advice for new grad students is, see if you can find an advisor who has beliefs similar to yours. Because if you work on stuff that your advisor feels deeply about, you’ll get a lot of good advice and time from your advisor. If you work on stuff your advisor’s not interested in, you get some advice, but it won’t be nearly so useful.

Top Company vs. Research Group

I think right now there aren’t enough academics trained in deep learning to educate all the people that we need educated in universities. Most departments have been very slow to understand the kind of revolution that’s going on. It’s not quite a second industrial revolution, but it’s something on nearly that scale. And there’s a huge sea change going on, basically because our relationship to computers has changed.

Instead of programming them, we now show them, and they figure it out. That's a completely different way of using computers, and computer science departments are built around the idea of programming computers. They don't yet understand that showing computers is going to be as big as programming them, and that half the people in the department should be people who get computers to do things by showing them.

In that situation, you have to rely on the big companies to do quite a lot of the training. Google is now training people we call Brain Residents, and I suspect the universities will eventually catch up.
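To make the contrast between programming and showing concrete, here is a toy sketch (my own illustration, not from the interview): instead of hand-coding the rule y = 3x + 2, we show the machine input/output examples and let gradient descent recover the rule.

```python
import numpy as np

# "Showing" instead of programming: the rule y = 3x + 2 is never written into
# the learner; it only sees examples and figures the rule out itself.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 3 * x + 2                       # examples generated by the hidden rule

w, b = 0.0, 0.0                     # parameters the machine will adjust
lr = 0.1
for _ in range(500):
    err = (w * x + b) - y
    w -= lr * 2 * (err * x).mean()  # gradient of the mean squared error w.r.t. w
    b -= lr * 2 * err.mean()        # gradient of the mean squared error w.r.t. b

print(round(w, 2), round(b, 2))     # ~3.0 and ~2.0, learned from examples alone
```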

Paradigms for AI

In the early days, back in the 50s, people like von Neumann and Turing didn't believe in symbolic AI; they were far more inspired by the brain. Unfortunately, they both died much too young, and their voices weren't heard.

And in the early days of AI, people were completely convinced that the representations you need for intelligence were symbolic expressions of some kind. Not quite logic, but something like logic, and that the essence of intelligence was reasoning.

What's happened now is that there's a completely different view: a thought is just a great big vector of neural activity. Contrast that with a thought being a symbolic expression. I think the people who believed that thoughts were symbolic expressions made a huge mistake.

What comes in is a string of words, and what comes out is a string of words. Because of that, strings of words are the obvious way to represent things, so people assumed that what's in between must be a string of words, or something like a string of words. I think what's in between is nothing like a string of words. The idea that thoughts must be in some kind of language is as silly as the idea that understanding the layout of a spatial scene must be in pixels: pixels come in and pixels could come out, but what's in between isn't pixels.

And so I think thoughts are just these great big vectors, and that big vectors have causal powers. They cause other big vectors, and that’s utterly unlike the standard AI view that thoughts are symbolic expressions.
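A minimal sketch of that picture (again my own illustration, not from the interview): one big vector of activity produces the next big vector through learned weights, rather than through rules applied to symbolic expressions.

```python
import numpy as np

rng = np.random.default_rng(0)
thought = rng.normal(size=512)                    # a "thought" as a big activity vector
W = rng.normal(size=(512, 512)) / np.sqrt(512)    # connection weights (random here, learned in practice)

next_thought = np.tanh(W @ thought)               # one big vector causes the next big vector
print(next_thought.shape)                         # (512,), still a vector, not a string of symbols
```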
