Week that went by: Thoughts on the big AI announcements from last week
Wednesday was a big day for AI-related news, with no fewer than five high-profile announcements from major companies. It's a lot to sort through, but together the announcements give an interesting snapshot of where things are and where they may be going, and strong evidence that AI in its current form of deep learning is maturing from lab experiments into an industry.
Element AI announced that it has raised $102M to develop an incubator for AI-related startups that will provide access to infrastructure and a critical mass necessary to build businesses — something that small groups of engineers might not otherwise have. They hope to support up to 50 startups and provide an alternative to the rapid consolidation of all things AI into the largest tech companies. Incubators such as Element AI might provide small companies an opportunity to actually take products to market.
Snapchat’s announcement revealed that it has spent considerable effort on improving efficiency so that phones can do deep-learning tasks without help from the cloud. There is a big difference between the computing resources needed to train a deep-learning network and those needed to actually use it. Training might take many passes through datasets of millions of images, but using the trained network (known as inference) might require processing only a single image. So it is reasonable to expect phone apps to make use of deep learning. Even so, deep-learning networks take up a lot of memory, and the number of arithmetic operations required for inference is proportional to the size of the network. Optimizations such as Snapchat’s are therefore critically important for mobile deployment. Now we can expect to see more mobile apps where recognition of objects is front and center.
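To make the scaling argument concrete, here is a minimal sketch (my own illustration, not Snapchat's method) of why shrinking a network shrinks both memory and arithmetic at inference time. It counts parameters and multiply-accumulate (MAC) operations for a stack of fully connected layers; the layer sizes are made up for the example.

```python
def dense_layer_cost(n_in, n_out):
    """Parameters (weights + biases) and MACs for one dense layer."""
    params = n_in * n_out + n_out
    macs = n_in * n_out  # one multiply-accumulate per weight
    return params, macs

def network_cost(layer_sizes, bytes_per_param=4):
    """Total parameters, MACs, and float32 memory for a layer-size list."""
    total_params = total_macs = 0
    for n_in, n_out in zip(layer_sizes, layer_sizes[1:]):
        p, m = dense_layer_cost(n_in, n_out)
        total_params += p
        total_macs += m
    return total_params, total_macs, total_params * bytes_per_param

# A full-size network vs. a slimmed-down "mobile" variant:
big = network_cost([1024, 512, 512, 10])
small = network_cost([1024, 128, 128, 10])
print(big, small)
```

Both the memory footprint and the operation count fall together as the layers narrow, which is why efficiency work like Snapchat's targets network size directly.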
Facebook has developed a chatbot that lies as it negotiates. More and more aspects of human behavior are being captured by deep-learning networks. Certainly, this is not new: huge amounts of statistical data are collected every day to determine and predict how people respond to various forms of information. Will people pay a particular amount for an airline seat? What does your opinion of the latest Batman movie say about your likelihood of seeing a chick flick, or even about what you eat or what car you might buy? Being able to predict human behavior statistically, and thus mimic that behavior, is very different from being able to act intelligently, but in many cases it is a powerful surrogate. The Facebook chatbot shows that if a strategy works, deep learning can capture it, even if it is a bit seedy.
Ms. Pac-Man falls to an AI player from Microsoft, with a perfect score no less. Ms. Pac-Man is a more difficult challenge than many classic games that have been beaten before because of its many different characters and goals. The team from Microsoft applied deep reinforcement learning (RL), which learns entirely by trial and error. The training involved more than 800 million video frames of the game (at 30 frames per second, that is roughly ten months of play, 24 hours a day; that's a lot of Pac-Man). Trained by RL, the system uses a series of 150 “agents,” each handling a subtask such as avoiding a ghost or collecting an object. The innovation here is the way the individual preferences of the agents are brought together to make the overall decisions. The team claims such a system might help salespeople decide which customers to serve first to maximize revenue. More likely, such an approach would be useful for self-driving cars, where the situation changes constantly and rapidly, and many different considerations must lead to single actions.
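A toy sketch, in the spirit of the aggregation idea described above but not Microsoft's actual code, can make this concrete. Assume each agent scores every possible action for its own subtask, and a simple aggregator sums the scores to pick one action; the agents, scores, and actions below are all invented for illustration.

```python
ACTIONS = ["up", "down", "left", "right"]

def aggregate_decision(agent_scores):
    """Pick the action with the highest summed score across all agents.

    agent_scores: list of dicts mapping action -> score for one agent.
    """
    totals = {a: 0.0 for a in ACTIONS}
    for scores in agent_scores:
        for action, score in scores.items():
            totals[action] += score
    return max(totals, key=totals.get)

# Two pellet-seeking agents mildly prefer "left"; one ghost-avoidance
# agent strongly vetoes it, so the combined decision goes elsewhere.
agents = [
    {"up": 0.1, "down": 0.0, "left": 0.6, "right": 0.2},
    {"up": 0.2, "down": 0.1, "left": 0.5, "right": 0.1},
    {"up": 0.3, "down": 0.2, "left": -5.0, "right": 0.4},
]
print(aggregate_decision(agents))  # "right": the ghost veto outweighs the pellets
```

The design point is that no single agent needs a global view of the game; each scores actions for its narrow goal, and the combination step resolves the competing considerations into one move.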
Finally, and perhaps most importantly, Vicarious AI announced a model-based reinforcement learning system, Schema Networks, that learns its task in a very different manner from deep learning. They point out that “traditional deep RL wins the game but misses the point.” By this they mean that a deep network has learned the right thing to do statistically at any point in time, but it does not necessarily have any underlying working model of what it is doing. Such RL systems can learn a workable and efficient strategy for a task, but that strategy may not be robust to even minor changes in the rules. Vicarious’ approach, on the other hand, learns a strategy that models the game’s tasks much more explicitly. When they alter the game without really changing its basics, the Schema Network adapts immediately, reusing what it knows, because the game hasn’t fundamentally changed. The deep-learning RL system pretty much has to relearn the game from scratch.
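The contrast can be sketched with a deliberately simple toy (purely illustrative, nothing like Vicarious' actual Schema Networks): a model-free policy that has memorized exact states, versus a policy built on a one-line model of the world. Shift every object in the game by one cell and only the rule survives.

```python
def memorized_policy(state, table):
    """Model-free caricature: look up the action learned for this exact state."""
    return table.get(state)  # unseen state -> no idea what to do

def model_based_policy(state):
    """Model-based caricature: move the paddle toward the ball, wherever it is."""
    paddle_x, ball_x = state
    if ball_x > paddle_x:
        return "right"
    if ball_x < paddle_x:
        return "left"
    return "stay"

# Policy memorized on the original layout:
table = {(3, 5): "right", (5, 3): "left"}

original = (3, 5)
shifted = (4, 6)  # the same situation, with everything moved one cell over

print(memorized_policy(original, table))  # "right"
print(memorized_policy(shifted, table))   # None: must relearn
print(model_based_policy(shifted))        # "right": the rule still applies
```

A real deep RL agent generalizes far better than a lookup table, but the failure mode is the same in kind: statistics tied to the training distribution, rather than a model of the game, break under rule changes that a model would shrug off.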
Wednesday’s announcements cover a lot of ground. Deep learning is a great tool that can be used in lots of different applications and is steadily maturing as an industry. Deep learning can reproduce, at least statistically, many forms of human behavior, good and bad. But the current trends of equating deep learning with AI, or confusing a set of behaviors with actual intelligence, already seem to be ill-advised.
Originally published at www.madstreetden.com
About the author: Costa Colbert is Senior VP and Chief Scientist at Mad Street Den Labs. He leads MSD’s charter alongside the CTO in building future neural network architectures that can enable more generalizable models of intelligence.