Three reasons deep learning rocked 2015

Plus three predictions for 2016

David J Klein
Machine Intelligence Report
Jan 5, 2016 · 5 min read


DNNResearch’s AlexNet (2012) is to Nirvana’s Nevermind (1991) as deep learning in 2015 is to grunge rock in 1994. In the span of a few years, deep learning (DL) has gone from crashing the party to hosting it.

That DL rocked software in 2015 has been well documented. The giants of tech snapped up talent and shipped a slew of mainstream apps like Moments and Translator. At Google alone, more than 2,000 projects and 70 teams featured DL, producing such 2015 gems as SmartReply, Photos, and revamped voice and text search. Deepnet-paloozas such as ICLR and NIPS saw explosions in attendance. The keynotes at NVIDIA’s tech conference all focused on DL. Funding reached new heights for startups like Clarifai, Enlitic, and Nervana. It was breathtaking and fun.

Over the year, I read — ok, scanned — over 1,000 new articles on advancements and investments in DL. In my attempt to summarize what emerged in 2015, I noticed a few dominant themes.

1. DeepDream and creative (sur)realism

What became DeepDream reverberated so widely that non-technical friends were sharing its fruits with no mention of the underlying tech. It even spawned Halloween costumes and a new species: the puppyslug.

Album cover artists could take creative input from such generative models, if only people still cared about album covers. Image from Alec Radford http://bit.ly/1ZKQS0a

This inceptionism was part of a big trend of generative algorithms that learn from data to produce creative and useful variations of reality. The output was usually images — faces, interiors, stylized paintings — but the trend also reached music, storytelling, handwritten characters, and 3D models & graphics.

Also in 2015, Artomatix won NVIDIA’s startup challenge, DeepMind released DRAW, Facebook released Eyescream, U. Tübingen introduced neural style transfer, and U. Toronto generated images from novel captions. GANs were everywhere.

Learning how to decompose data into meaningful compositional factors is captivating and important. There are obvious applications in digital arts, game design, 3D design tools, and video & audio compression. Coupled with physical synthesis technology like nanoscale 3D printing and genome editing, such techniques should eventually lead to the manifestation of new, useful materials, medicines, and organisms. Pet puppyslugs, anyone?
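For the tinkerers: the core trick behind the puppyslugs is surprisingly small. Below is a bare-bones sketch of DeepDream-style gradient ascent (not Google's actual code) that nudges an image to amplify whatever a mid-level CNN layer responds to. It assumes a working TensorFlow/Keras install; VGG16 and the block4_conv1 layer are arbitrary choices of mine, not anything from the original project.

```python
import tensorflow as tf

# A pretrained CNN as the "dreamer"; any mid-level layer works.
base = tf.keras.applications.VGG16(include_top=False, weights="imagenet")
layer = tf.keras.Model(base.input, base.get_layer("block4_conv1").output)

img = tf.Variable(tf.random.uniform((1, 224, 224, 3)))  # noise, or load a photo

for step in range(100):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(layer(img) ** 2)  # "dream": amplify the layer's activations
    grad = tape.gradient(loss, img)
    img.assign_add(0.01 * grad / (tf.math.reduce_std(grad) + 1e-8))  # normalized ascent step
    img.assign(tf.clip_by_value(img, 0.0, 1.0))  # keep pixels in a displayable range
```

In practice you would start from a photo and run the ascent at multiple scales; this is just the skeleton of the idea.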

2. Autopilot and human-machine collaboration

In matching outfits, NVIDIA CEO Jen-Hsun Huang (left) and Tesla CEO Elon Musk discuss deep learning and autonomous driving. Every keynote at GTC was focused on deep learning. Image from http://bit.ly/1FK9DZ1

Google’s hesitant self-driving cars are a common sight here in the Valley, but in 2015 Tesla put something kick-ass and slightly dangerous right into the crowd’s hands. Autopilot: a collaborative learning system designed both to learn from drivers and to help them drive while they find some good music.

Such human-in-the-loop learning systems became a common refrain from researchers to investors, who finally got that simply incanting “deep learning” over a pile of data won’t get you far in the real world. Interactive systems afford faster learning and yield copious, meaningful data, which increases in value as the learning algorithms continue to improve.

Along these lines, 2015 also brought robotics startup Osaro, interactive approaches to image annotation (producing sweet datasets including Visual Genome and LSUN), and methods to adaptively leverage crowdsourced input as a machine progressively assumes responsibility. Intelligent systems should be designed to be interactive and interpretable, leading the human-machine hybrid to ever-advancing understanding and capability.
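To make that concrete, here is a toy sketch of one classic human-in-the-loop pattern, uncertainty sampling: the model trains on a handful of labels, then asks its human for labels only on the examples it finds most confusing. It uses scikit-learn, and the synthetic dataset and labeling loop are stand-ins of mine, not any particular company's pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, random_state=0)
labeled = list(range(20))                        # start with a few labeled examples
pool = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression()
for round_ in range(10):
    model.fit(X[labeled], y[labeled])
    proba = model.predict_proba(X[pool])
    uncertainty = 1 - proba.max(axis=1)          # low top-class confidence = confusing
    ask = [pool[i] for i in np.argsort(-uncertainty)[:10]]
    labeled += ask                               # a human would supply y[ask] here
    pool = [i for i in pool if i not in ask]

print("accuracy:", model.score(X, y))
```

The payoff is that each requested label is worth more than a random one, which is exactly why interactive systems learn faster from less data.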

3. OpenAI and the start of the Open Era

Couldn’t resist. Art: Sean Tubrity

Grunge bloomed in Seattle because of the mutual inspiration and collaboration between bands in the local scene. Similarly, deep learning owes its rapid rise to the unprecedented openness of its researchers, many of whom work in large corporations.

Open software toolkits for DL were already numerous, but in 2015, incredibly, Facebook, Google, Microsoft, Samsung, and NVIDIA all open-sourced their libraries, joined by the accomplished startup Nervana and the popular Keras. The toolkits now number at least fifty. Facebook and NVIDIA even shared their DL hardware designs.

Despite this openness, corporate researchers could still be swayed by their companies’ profit motives. To address this danger, which grows with the power of the AI they aspire to build, Elon Musk, Sam Altman, and others committed $1 billion to found the non-profit OpenAI. With its funding, strong research team, and mission of openness and benevolence, OpenAI should at least prove a formidable recruiter for DL R&D talent. Some additional thoughts and questions about OpenAI can be found here.

Anticipated themes for 2016

Predictions are mostly wrong, useless, or boring. Yes, of course, much of the current R&D will continue and advance — but what unexpected, game-changing developments might there be in 2016?

  1. Deep learning in simulated reality will markedly increase the pace of research by reducing the amount of real-world data that must be collected and labeled. Gaming and GPUs have greatly advanced simulated physics in recent years. Related to data augmentation, learning-in-simulation is under study for robotics and computer vision, as are methods for transferring simulated learning to the real world (see the sketch after this list). This could be coupled with VR for human-machine collaboration in simulated reality.
  2. The life sciences — likely genomics or neuroscience — will produce impactful new findings fueled by deep learning. New companies like Deep Genomics and Atomwise, and recent research in BCI, are emerging at this intersection. Large, data-rich companies like Google and Illumina are jumping on board, and open data is ripe for impact.
  3. President Obama will make a statement, order, or initiative involving deep learning. The White House OSTP was making the DL rounds in 2015, and D.J. Patil joined its staff. They are aware of DL’s potential for social issues like clean energy, healthcare, smart cities, agriculture, and biodiversity conservation (not to mention surveillance). They’ve been nudging exascale computing forward. OpenAI is a catalyst. While watching the Governor General of Canada praise research in AI, I realized Obama shouldn’t leave office before imparting his vision of the public utility of this critical research.
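And here is the toy sketch promised in item 1: train on plentiful, perfectly labeled simulated data, then check how the model holds up on scarcer, noisier "real" data. Pure NumPy and scikit-learn; the two-feature "simulator" is a stand-in for a proper physics engine, and every name in it is invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def simulate(n, noise):
    """Fake sensor readings: class-1 objects read heavier and wider."""
    labels = rng.integers(0, 2, n)
    readings = labels[:, None] * [1.0, 0.5] + rng.normal(0, noise, (n, 2))
    return readings, labels

X_sim, y_sim = simulate(5000, noise=0.3)    # plentiful, perfectly labeled
X_real, y_real = simulate(200, noise=0.5)   # scarce, messier "real world"

model = RandomForestClassifier().fit(X_sim, y_sim)
print("sim-to-real accuracy:", model.score(X_real, y_real))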
