Deep Learning Beyond the Image Domain

Cole McCollum · Published in The Startup · 2 min read · Jan 15, 2018

The caricature of deep learning: it's great for building a cat classifier when you have a million training examples, but where is its real-world utility?

There's clearly been a disproportionate amount of ML research done on simple image classification, mostly, I suspect, because of the availability of huge, well-labeled datasets. The same scarcity explains why so little classification research exists for other sensing modalities (speech recognition aside). But what if I wanted to apply deep learning to wearable data, or to the sensors on an oil rig? What about non-speech audio or vital-sign monitoring? I'd be hard-pressed to find many useful public datasets.

For deep learning to start proving itself useful in higher-value classification problems, we need (a) better datasets, (b) better algorithms, or (c) hacky solutions. While (a) and (b) are evergreen pursuits, the area I'm most excited by is (c), in particular, modality transformation combined with transfer learning.

We could take advantage of the fact that so much research has gone into image classification by turning everything into an image problem. The idea is inspired by the spectrogram: you convert non-image sensor data into images, label them, and run them through a CNN pre-trained on cat pictures (or something more relevant to the problem). A few papers have already applied this to, for example, time-series pressure-sensor data, but I expect it to become a trend in 2018 as engineers look to apply deep learning to industry problems.
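A minimal sketch of the first half of that pipeline, assuming SciPy and NumPy. The function name, window parameters, and nearest-neighbour resize are my own choices for illustration; a real pipeline would use a proper image library for resizing and then feed the resulting array into a pre-trained CNN (say, an ImageNet ResNet) for transfer learning.

```python
import numpy as np
from scipy import signal

def sensor_to_image(x, fs=100, size=224):
    """Turn a 1-D sensor trace into a 3-channel 'image' sized for an ImageNet-style CNN."""
    # Short-time Fourier power: rows are frequency bins, columns are time windows
    f, t, Sxx = signal.spectrogram(x, fs=fs, nperseg=64, noverlap=48)
    logS = np.log1p(Sxx)  # compress dynamic range, as is typical for spectrograms
    logS = (logS - logS.min()) / (logS.max() - logS.min() + 1e-8)  # scale to [0, 1]
    # Naive nearest-neighbour resize to size x size (avoids an image-library dependency)
    rows = np.linspace(0, logS.shape[0] - 1, size).astype(int)
    cols = np.linspace(0, logS.shape[1] - 1, size).astype(int)
    img = logS[np.ix_(rows, cols)]
    # Replicate the single channel three times to mimic RGB input
    return np.stack([img, img, img], axis=-1)

# e.g. a 10-second accelerometer-like trace sampled at 100 Hz
x = np.sin(2 * np.pi * 5 * np.arange(1000) / 100) + 0.1 * np.random.randn(1000)
img = sensor_to_image(x)
print(img.shape)  # (224, 224, 3)
```

From here, the labeled images can be used to fine-tune only the final layers of a pre-trained network, which is what makes the approach viable on the small datasets these modalities typically offer.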

