Day 4 was rich in thought-provoking talks rather than immediately useful papers.
I spent the first part of the day at the Neuroscience track. It was refreshing in the sense that machine learning there is used for modeling rather than for solving practical tasks: "better statistics", if you will.
The "Model based …" talk presented the problem of determining which brain neuron is connected to which, and a solution that combines laser stimulation, measuring fluorescence as the outcome, and processing the results with a rather complicated model. Maybe after finishing my PhD I will consider such an application of my ML skills instead of building computer vision algorithms.
Then I went to the Symposium with the most vague and abstract topic: "Kinds of intelligence".
Lucia Jacobs talked about the types of navigation systems animals use: olfactory, the most ancient, and visual. From another angle: detection-based ("I sense prey there") vs. prediction-based ("the rabbit will jump there") and their implications. She argued that much of the brain is about navigation within some environment structure, and that studying the evolution of natural navigation systems could help in the work of creating AI.
- The more intelligent a creature is, the longer its childhood lasts. It is somehow necessary for later life.
- Children are on the "exploration" side of the exploration-exploitation trade-off. The "bugs" of children's behavior are features for learning.
- Children generate more complex and non-trivial hypotheses about an underlying process than adults do.
Key take-away: give your children a safe and loving childhood :)
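The exploration-exploitation trade-off mentioned above is the classic multi-armed-bandit setting. A minimal epsilon-greedy sketch, where a high epsilon roughly plays the "child" role (lots of exploration) and a low epsilon the "adult" one; all names and parameters here are illustrative, not from any talk:

```python
import random

def epsilon_greedy(estimates, epsilon):
    """Pick an arm: explore with probability epsilon, else exploit the best estimate."""
    if random.random() < epsilon:
        return random.randrange(len(estimates))  # explore: try a random arm
    return max(range(len(estimates)), key=lambda i: estimates[i])  # exploit

def update(estimates, counts, arm, reward):
    """Running-average update of the chosen arm's value estimate."""
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]

# "Childhood": epsilon close to 1, mostly exploring the arms.
estimates, counts = [0.0, 0.0, 0.0], [0, 0, 0]
arm = epsilon_greedy(estimates, epsilon=0.9)
update(estimates, counts, arm, reward=1.0)
```

With epsilon annealed toward zero over time, the agent shifts from exploration to exploitation, loosely mirroring the development described above.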
Demis Hassabis presented DeepMind's philosophy and the AlphaZero algorithm.
Their principles are: learning, generic, grounded, general, active.
"Grounded vs. logic-based" is the most interesting and surprising dichotomy for me; I had never thought about it.
AlphaZero take-aways: the resulting system is long-term oriented, has no concept of material value, and is flexible and patient. Good advice for life, actually.
The next talk was by Gary Marcus, who devoted it entirely to pointing out why AlphaZero is not really "Zero" (e.g. Monte Carlo tree search is quite an important hand-crafted component) and why we are far from solving AI. He also recommended Jerry Fodor's books.
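The hand-crafted component Marcus points at is, for instance, the selection rule inside Monte Carlo tree search. As a hypothetical sketch, here is the classic UCB1 formula often used for that step (not DeepMind's exact PUCT variant):

```python
import math

def ucb1_select(child_values, child_visits, parent_visits, c=1.4):
    """UCB1: balance exploitation (mean value) against exploration (rarely
    visited children). Unvisited children score infinity, so they are tried first."""
    def score(i):
        if child_visits[i] == 0:
            return float("inf")
        mean = child_values[i] / child_visits[i]
        return mean + c * math.sqrt(math.log(parent_visits) / child_visits[i])
    return max(range(len(child_values)), key=score)
```

The exploration constant `c` and the form of the bonus term are exactly the kind of human design choices that make "Zero" a bit of a misnomer.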
I half-missed the rest of the talks. The most important take-away from them:
Beliefs and values can be inferred from the actions a person takes. Children are quite good at this, and it is possible for machine learning as well.
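Inferring a hidden preference from observed actions can be framed as simple Bayesian inference. A toy sketch, with entirely made-up hypotheses and probabilities just to show the mechanics:

```python
def posterior(prior, likelihoods, action):
    """Bayes' rule: update beliefs over hypotheses after observing one action.
    prior: {hypothesis: p(h)}, likelihoods: {hypothesis: {action: p(action|h)}}."""
    unnorm = {h: prior[h] * likelihoods[h][action] for h in prior}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Hypothetical example: which fruit does the person prefer?
prior = {"likes_apples": 0.5, "likes_pears": 0.5}
lik = {"likes_apples": {"pick_apple": 0.9, "pick_pear": 0.1},
       "likes_pears":  {"pick_apple": 0.2, "pick_pear": 0.8}}
post = posterior(prior, lik, "pick_apple")  # belief shifts toward "likes_apples"
```

Repeating the update over a sequence of actions sharpens the inferred preference, which is roughly what both children and inverse-reinforcement-learning systems do.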
That is roughly all for me :)
Day 1 is here, days 2–3 TBD