So, in a nutshell transfer learning means adding a few layers to a trained model?
Paul-Louis Pröve

Absolutely. That approach goes by this name, or by "inductive transfer."

The issue is that mostly people already in the field know that it's being used. It allows you to train models with far less data. People outside the field would like to use deep learning but hold back because they think it takes too much data to train large models. Popularizing the concept of transfer learning should help debunk that myth.
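A minimal sketch of the idea in PyTorch: freeze a pretrained backbone and train only a few new layers on top. The backbone here is a stand-in (in practice you would load real pretrained weights, e.g. from torchvision); the layer sizes are illustrative.

```python
import torch.nn as nn

# Stand-in for a pretrained backbone -- in practice, load real weights.
backbone = nn.Sequential(
    nn.Linear(32, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
)

# Freeze the pretrained layers so their weights stay fixed.
for p in backbone.parameters():
    p.requires_grad = False

# Add a new head for the target task; only these layers get trained.
model = nn.Sequential(backbone, nn.Linear(64, 3))

trainable = [name for name, p in model.named_parameters() if p.requires_grad]
print(trainable)  # only the new head's weight and bias
```

Because only the small head is updated, the new task can be learned from a much smaller dataset than training the whole network from scratch would require.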
