I attended NIPS (Neural Information Processing Systems) last week, one of the big annual machine intelligence conferences. About 3,700 attendees were sustained from 9am to midnight on a diet of maths, statistics, computing and neuroscience. Every day for a week. It was somewhat overwhelming.
Favourite presented paper
I. Mordatch, K. Lowrey, G. Andrew, Z. Popovic, and E. Todorov, “Interactive control of diverse complex characters with neural networks,” in Advances in Neural Information Processing Systems (NIPS), 2015.
This was fun. The video below shows the results of training a recurrent neural net to automatically synthesise complex movements, such as swimming or flying. The applications are in robotics, biomechanics and graphics (automating animation in films and interactive games).
We were also treated to a video of a humanoid robot leaning and reaching, in a remarkably natural way, which I think is associated with this, more recent, paper:
I. Mordatch, N. Mishra, C. Eppner, and P. Abbeel, “Combining Model-Based Policy Search with Online Model Learning for Control of Physical Humanoids,” preprint, 2016.
The model does not yet include sensor input of contact forces, which is needed for locomotion, so that will be the focus of future work.
Favourite invited talk
A fascinating survey of experiments testing the premise that psychiatric diseases are disorders of brain dynamics, and may be treated with non-invasive techniques and machine learning to restore the brain back to a healthy state.
It seems incredible, but apparently it is possible to control or change cognitive functions without the subject’s conscious awareness, and these changes are relatively long-lasting. Such techniques could be used to treat conditions such as depression, PTSD and chronic pain.
With great power comes great responsibility…
NIPS 2015 Symposium - Brains, Mind, and Machines
This symposium was absolutely excellent. It explored General AI and the brain and how the two fields might (continue to) learn from one another.
There may be other ways to build intelligence. But the number of solutions may be vanishingly small compared to the size of search space. So it might be most efficient to reengineer the brain — something we know works.
Demis Hassabis, Google DeepMind
The symposium concluded with a panel session debating what skills are required to enter and contribute to the field. The panel agreed that one must have a very solid maths and coding base to build on and must develop deep expertise, but that one would also need to encompass neuroscience, psychology and biology. And all of this would have to be learnt whilst young (meaning postgrad is too late). So there you have it. Broad, deep, and best accomplished young. Most dispiriting for this 36 year old!
- I added about 20 new companies to a list I’m keeping of machine intelligence startups. That’s 20 that were not just new to me (I added about 50 overall), but new, new. Hardly a website between them, and no Crunchbase or similar profiles. Launched to coincide with NIPS.
- There was a lot of recruitment going on. Aside from the usual suspects, 10 of the sponsors were financial companies, Uber targeted NIPS attendees hoping to get a taxi, and there was guerrilla recruitment via the conference noticeboard.
- In a conference that was largely a hymn to the GPU (despite constraints of memory and cost), it was noteworthy when a result was presented that didn’t use one (or several).
- Scanning the audience every day there was a sea of MacBooks, I’d guess 95%. I’ve no axe to grind, just thought it was quite interesting that the preference among attendees seemed so pronounced.
The launch of OpenAI last week (and, to a lesser extent, Microsoft Research’s results on ImageNet) rather overshadowed news from the conference itself, in the sense that there didn’t appear (to me) to be one stand-out result. In a way, deep learning has been a victim of its own success. We have all grown habituated to its ‘unreasonable effectiveness’ as applied to numerous data sets and problems. Yet it’s only 3 years since Krizhevsky, Sutskever, and Hinton presented “ImageNet Classification with Deep Convolutional Neural Networks” at the same conference.
Great progress is being made with developments in reinforcement learning, attention models, memory and multi-modal learning (as well as non-deep-learning methods of course, what Vapnik called ‘intelligent learning’ in his invited talk), but perhaps we have unreasonable expectations. This might explain why the Quantum Machine Learning workshop attracted such a big crowd. Is this where the next breakthrough is going to come from?
“Deep Learning is brute force learning, but it is not intelligent learning”
Vladimir Vapnik, Facebook