Using a focused feature model does not improve your transfer learning
The first time I heard about feature models, I was surprised: you can use a model trained on a general dataset as a starting point for training on a focused dataset. For instance, you can train a model on medical images using a feature model that was trained on a dataset of chairs. Basically, the model learns to find features; it is similar to zipping the image down to its important features.
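To make the idea concrete, here is a minimal, dependency-free sketch of the pattern. It is a toy, not the real TensorFlow.js setup: `extractFeatures` stands in for a frozen pretrained network such as MobileNet (here it just compresses an input into two summary numbers), and only a tiny logistic-regression "head" is trained on top of those features. All names and data are illustrative.

```typescript
// Toy "pretrained" feature extractor: compresses a raw input into 2 features.
// In real transfer learning this would be a frozen network such as MobileNet.
function extractFeatures(raw: number[]): number[] {
  const mean = raw.reduce((a, b) => a + b, 0) / raw.length;
  const variance = raw.reduce((a, b) => a + (b - mean) ** 2, 0) / raw.length;
  return [mean, variance]; // the "zipped" representation of the input
}

// Tiny logistic-regression head trained with per-sample gradient descent.
// Only this part is trained; the extractor above stays fixed.
function trainHead(features: number[][], labels: number[], epochs = 500): number[] {
  const w = new Array(features[0].length + 1).fill(0); // [bias, w1, w2]
  const lr = 0.5;
  for (let e = 0; e < epochs; e++) {
    features.forEach((f, i) => {
      const z = w[0] + f.reduce((s, x, j) => s + x * w[j + 1], 0);
      const p = 1 / (1 + Math.exp(-z)); // sigmoid
      const err = p - labels[i];
      w[0] -= lr * err;
      f.forEach((x, j) => (w[j + 1] -= lr * err * x));
    });
  }
  return w;
}

function predict(w: number[], raw: number[]): number {
  const f = extractFeatures(raw);
  const z = w[0] + f.reduce((s, x, j) => s + x * w[j + 1], 0);
  return 1 / (1 + Math.exp(-z)) > 0.5 ? 1 : 0;
}

// Tiny "focused" dataset: class 0 = low values, class 1 = high values.
const rawInputs = [[0, 1, 0], [1, 0, 1], [8, 9, 8], [9, 8, 9]];
const labels = [0, 0, 1, 1];
const w = trainHead(rawInputs.map(extractFeatures), labels);
console.log(predict(w, [0, 1, 1])); // expect class 0
console.log(predict(w, [9, 9, 8])); // expect class 1
```

The point of the split is that the expensive, data-hungry part (the extractor) is reused as-is, and only the small head needs your focused dataset.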
What if the feature model was trained on a dataset similar to yours?
Theoretically, it would be even better; that is what I initially thought. Sadly, no.
I have been experimenting with transfer learning applied to biological images (e.g., birds). When I found out about iNaturalist, I learned there is a feature model trained with images from iNaturalist. I tested it, and the results were worse than with MobileNet, which is a general model.
It is hard to say why. One possible explanation is that feature extraction is a general task, and training on a focused dataset just diminishes the model's capability. A bird can be found in many different places; learning to find features in general scenarios may be more useful than specializing the model.
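This explanation can be illustrated with a deliberately simple toy, under the assumption that a focused extractor discards information a new task needs. Below, two hypothetical extractors feed the same nearest-centroid "head": the general one keeps both input dimensions, while the focused one collapses the input to a single dimension. The data is made up so that the classes differ only in the dimension the focused extractor drops.

```typescript
type Extractor = (raw: number[]) => number[];

// "General" extractor keeps all dimensions; "focused" one drops a feature
// that happened to matter for the new task (a toy assumption, not a claim
// about how iNaturalist or MobileNet actually behave internally).
const generalExtractor: Extractor = (raw) => [raw[0], raw[1]];
const focusedExtractor: Extractor = (raw) => [raw[0]];

// Same simple head for both extractors: nearest-centroid classification.
function accuracy(extract: Extractor, data: number[][], labels: number[]): number {
  const centroids: number[][] = [];
  const counts: number[] = [];
  data.forEach((raw, i) => {
    const f = extract(raw);
    const y = labels[i];
    if (!centroids[y]) { centroids[y] = f.map(() => 0); counts[y] = 0; }
    f.forEach((x, j) => (centroids[y][j] += x));
    counts[y]++;
  });
  centroids.forEach((c, y) => c.forEach((_, j) => (c[j] /= counts[y])));

  let correct = 0;
  data.forEach((raw, i) => {
    const f = extract(raw);
    const dists = centroids.map((c) =>
      c.reduce((s, x, j) => s + (x - f[j]) ** 2, 0)
    );
    if (dists.indexOf(Math.min(...dists)) === labels[i]) correct++;
  });
  return correct / data.length;
}

// The classes differ only in the SECOND dimension.
const data = [[1, 0], [1, 0.2], [1, 5], [1, 5.2]];
const labels = [0, 0, 1, 1];
console.log(accuracy(generalExtractor, data, labels)); // 1 (all correct)
console.log(accuracy(focusedExtractor, data, labels)); // 0.5 (chance level)
```

The focused extractor is not "worse" in general; it simply threw away a feature that turned out to matter here, which is one plausible reading of why the general MobileNet features transferred better.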
Transfer Learning in Angular
Learning to apply transfer learning using TensorFlow.js in TypeScript
https://www.udemy.com/course/transfer-learning-in-angular/?referralCode=0C993F969C6D1AA22418