Machine learning, and particularly deep learning, is an area of intense interest in computer science today. Tech giants including (but certainly not limited to) Google, Facebook, Baidu, IBM, and Microsoft are spending enormous amounts of money and effort to hire the best machine learning researchers.
Deep learning has outperformed traditional computer vision (CV) technology in recent years. In the 2010 ImageNet Challenge, the best traditional CV algorithm had an error rate of 28.2%, which meant it got about 72 out of 100 images correct. In 2011, the best algorithm clocked in at a 25.8% error rate. In 2012, however, the first deep learning entry came into the competition and blew away all traditional methods with a 16.4% error rate.
After the first deep learning algorithm opened the doors, the competition was flooded with other deep learning teams, which achieved an 11.7% error rate in 2013 and a 6.7% error rate in 2014. The 2014 winner recognized 93 out of 100 images correctly, a huge leap forward compared to the 72 out of 100 performance by the best traditional CV system in 2010.
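The conversion behind those "out of 100" figures is simply 100 minus the error rate. A quick sketch, using the error rates cited above:

```python
# ImageNet Challenge top error rates cited in the article, by year.
error_rates = {2010: 28.2, 2011: 25.8, 2012: 16.4, 2013: 11.7, 2014: 6.7}

for year, err in sorted(error_rates.items()):
    # Accuracy expressed as "images correct out of 100".
    correct = round(100 - err)
    print(f"{year}: {err}% error \u2248 {correct}/100 images correct")
```

Running this reproduces the article's figures: roughly 72/100 for 2010 and 93/100 for the 2014 winner.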
This is a Renaissance period for deep learning. While machine learning has been a concept since the 1960s and deep learning has been discussed since the 1980s, today the combination of data availability and processing power has brought the field to new heights.
At a panel this week at Stanford, five pioneers in the industry talked about the past, present, and future of machine learning in front of a packed house of 600 people.
Today, deep learning is still in its infancy. Steve Jurvetson (@dfjsteve), partner at Draper Fisher Jurvetson, moderated the panel and argued that big data isn't exciting; the ability to do something with the data is. Machine learning is that ability, and the method is applied on large neural networks running across many computers.
Despite these impressive achievements, even the most advanced neural networks in the world today pale in comparison to the human brain (see chart). To continue building up the hardware capabilities to handle deep learning, Google is rumored to be working toward creating a 1-trillion-node network.
But not all deep learning is created equal. Elliot Turner (@eturner303), CEO of AlchemyAPI, points out that in order to create the most accurate and useful algorithms, you need more than just sheer volumes of data. The type of data you use matters.
As a real-world example from image recognition: a system trained on a set of faces that closely resembled each other ended up better trained than another system given far more data spanning a diverse array of faces. The closely resembling images made the system much more accurate because it had to learn finer-grained distinctions in order to tell similar-looking people apart. After a certain threshold of data input, the key isn't to feed a system more data, but to carefully select the type of data you use.
Over time, researchers and entrepreneurs will refine their methods of using deep learning to develop even more accurate and efficient ways to train a system.
Naveen Rao, CEO of Nervana Systems, is working on the hardware that can power deep learning. He draws some of his inspiration from biology. Silicon as a material has very different properties from the goo in our brains, so why would neural networks continue to move closer to biology?
While it’s very difficult to predict the evolution of the computing systems that will support deep learning, Naveen thinks that these systems could be very brain-like because that’s what humans want. And humans are the ones building these machines. For now.
Ilya Sutskever (@ilyasut), a researcher on the Google Brain project, is optimistic about the future. He is excited to work on machine learning because it solves very comprehensive problems in a neat way. His advice to students is to get down to the nuts and bolts and become an expert, as machine learning is a highly technical, perfection-driven field.
Machine learning has already made an impact in speech recognition, image recognition, prediction and anomaly detection, among other things (see chart).
But imagine a future of more ubiquitous and accurate machine learning. Adam Berenzweig, CTO of Clarifai, is working on better image recognition through deep learning. He imagines a future where your smart fridge will know when the milk is empty simply by snapping a few images of what's inside. The fridge will recognize spoiled food based on its shape and colors and alert the owner. If you want to buy a pair of shoes, just take a picture and you'll be directed to the right Amazon page to purchase them. The possibilities are endless and magical.
The future of deep learning is exciting and bright. And we are just getting started.
“[Deep learning] scales beautifully. Basically you just need to keep making it bigger and faster, and it will get better.” — Geoffrey Hinton
If you found value in this article, it would mean a lot to me if you hit the recommend button.
I would love to hear from you @gsvpioneers.