NIPS 2017, Day 4 (orals + symposium)

Dmytro Mishkin
Dec 10, 2017 · 4 min read

Day 4 was rich in thought-provoking talks rather than short-term-useful papers.

I spent the first part of the day at the Neuroscience track. It was quite refreshing in the sense that machine learning there is used for modeling rather than for solving practical tasks. "Better statistics", if you like.

The "Model-based …" talk presented the problem of determining which brain neuron is connected to which, and a solution that combines laser stimulation, measuring fluorescence as the outcome, and processing the results with a fairly complicated model. Maybe after finishing my PhD I will consider such an application of my ML skills instead of building computer vision algorithms.

Model-based Bayesian inference of neural activity and connectivity from all-optical interrogation of a neural circuit

"Shape and Material from Sound" and "Scene Physics Acquisition via Visual De-animation", both from Josh Tenenbaum's group, presented a nice way of teaching a CNN some intuition about the physics of the world.

Then I went to the Symposium with the most vague and abstract topic: "Kinds of Intelligence".

Lucia Jacobs talked about the types of navigation systems animals use: scent-based, the most ancient, and visual. From another angle: detection-based ("I sense prey there") vs. prediction-based ("the rabbit will jump there"), and their implications. She argued that a lot of the brain is about navigation in some environment structure, and that studying the evolution of natural navigation systems could help in AI-creation work.

Alison Gopnik spoke about the child brain and argued that it is much more relevant for ML than the adult brain. Slides are here

Alison Gopnik quotes Turing
Frequent terms in children's cognitive science are much closer to ML than those in adult cognitive science

Several take-aways:

  1. The more intelligent a creature is, the longer its childhood. It seems to be somehow necessary for later life.
  2. Children are on the "exploration" side of the exploration-exploitation trade-off. The "bugs" of children's behavior are features for learning.
  3. Children generate more complex and non-trivial hypotheses about the underlying process than adults do.

Children do “high-temperature search”
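The "high-temperature search" idea can be illustrated with a temperature-scaled softmax over candidate hypotheses: high temperature flattens the distribution (a child trying everything), low temperature sharpens it toward the current best guess (an adult exploiting). This is my own minimal sketch of the general concept, not something from the slides; the scores are hypothetical.

```python
import math
import random

def softmax(scores, temperature):
    """Temperature-scaled softmax: high temperature flattens the
    distribution (exploration); low temperature sharpens it (exploitation)."""
    exps = [math.exp(s / temperature) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample(scores, temperature):
    """Draw one option index according to the softmax probabilities."""
    probs = softmax(scores, temperature)
    r = random.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(scores) - 1

hypotheses = [1.0, 2.0, 3.0]  # hypothetical utility scores
# temperature=5.0: near-uniform sampling, every hypothesis gets tried;
# temperature=0.1: almost always picks the highest-scoring hypothesis.
```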

Key take-away: give your children safe and loving childhood :)

Demis Hassabis presented DeepMind philosophy and AlphaZero algorithm.

Principles are: learning, generic, grounded, general, active

"Grounded vs. logic-based" is the most interesting and surprising dichotomy to me. I had never thought about it.

The AlphaZero takeaways are: the resulting system is long-term oriented, has no concept of material, and is flexible and patient. Good advice for life, actually.

The next talk was by Gary Marcus, who devoted the whole talk to pointing out why AlphaZero is not really "zero" (e.g., Monte Carlo tree search is quite an important hand-crafted concept) and why we are far from solving AI. He also recommended Jerry Fodor's books.
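Marcus's point that MCTS is itself hand-crafted prior knowledge is easy to see from its node-selection rule. Below is a minimal sketch of the classic UCB1 formula used in tree search, as an illustration of the general idea only (AlphaZero actually uses a PUCT variant guided by network priors); the numbers are made up.

```python
import math

def ucb1(total_value, visits, parent_visits, c=1.4):
    """UCB1 score: average value (exploitation) plus a bonus that grows
    for rarely visited children (exploration). Unvisited children get
    infinite priority so each one is tried at least once."""
    if visits == 0:
        return float("inf")
    return total_value / visits + c * math.sqrt(math.log(parent_visits) / visits)

# Pick the child maximizing UCB1:
children = [(3.0, 5), (1.0, 1), (0.0, 0)]  # (total_value, visits), hypothetical
parent_visits = 6
best = max(range(len(children)),
           key=lambda i: ucb1(children[i][0], children[i][1], parent_visits))
# The unvisited third child is selected first.
```

The constants and the very shape of this rule were engineered by humans, which is exactly the "not zero" argument.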

Cognition is a function of knowledge, experience, and algorithms

I half-missed the rest of the talks. The most important take-away from them is:

Beliefs and values can be inferred from the actions a person takes. Children are quite good at it, and it is possible for machine learning as well.

That is roughly all for me :)
Day 1 is here, days 2–3 TBD

Written by Dmytro Mishkin

Computer Vision researcher and PhD student in Prague. Co-founder of the Ukrainian research group "Szkocka" and the Eastern European Computer Vision Conference.
