[Week 4–5 Real Estate Price Estimation]

Ufuk Baran Karakaya
Published in bbm406f18 · Dec 30, 2018

After the progress report, we looked into Transfer Learning, which according to the literature offers clear benefits in machine learning projects. This blog post covers the definition of Transfer Learning, its advantages, and different ways to apply it.

What is Transfer Learning?

In the fourth week, we researched transfer learning. Transfer Learning is not a single algorithm or technique; it is a design methodology in which parts of an already trained model are reused to improve the accuracy of a new model.

In most machine learning models, the primary goal is to generalize to unseen data based on patterns learned from the training data. With transfer learning, we can jump-start this generalization process by reusing patterns that have already been learned for a different task. Essentially, instead of starting the learning process from a blank sheet, we start from patterns learned to solve a related task.

source: https://medium.com/@vinayakvarrier/significance-of-transfer-learning-in-the-domain-space-of-artificial-intelligence-1ebd7a1298f2

Advantages of Transfer Learning

Its main advantage is the training time it saves, and in most cases the resulting neural network also performs better. Training a neural network from scratch usually requires a lot of data, and we do not always have enough. In that situation, starting from a model created for a different task provides a more robust structure.

Approaches to Transfer Learning

  • Training a Model to Reuse it
  • Using a Pre-Trained Model
  • Feature Extraction

1) Training a Model to Reuse It

Suppose we want to solve task A but do not have enough data to train a deep neural network on it. One way around this is to find a related task B for which we have much more data.
We can then train a deep neural network on task B and use that model as a starting point for solving task A. Whether we reuse the whole model or only a few layers depends heavily on the problem we are trying to solve.
If both tasks take the same kind of input, we can simply reuse the model and make predictions on new inputs. Alternatively, we can retrain only the task-specific layers and the output layer, as in the sketch below.
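As a concrete illustration, here is a minimal Keras sketch of this idea. The data, layer sizes, and the "related task" are hypothetical placeholders, not details of our project: a network is trained on the larger task, and its hidden layers are reused as the starting point for the smaller one.

```python
import numpy as np
from tensorflow import keras

# Hypothetical data: a large related task (task B) and a small target task (task A).
X_large, y_large = np.random.rand(5000, 20), np.random.rand(5000)
X_small, y_small = np.random.rand(300, 20), np.random.rand(300)

# 1) Train a network on the larger, related task.
base = keras.Sequential([
    keras.layers.Dense(64, activation="relu", input_shape=(20,)),
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(1),
])
base.compile(optimizer="adam", loss="mse")
base.fit(X_large, y_large, epochs=5, verbose=0)

# 2) Reuse every layer except the output layer as the starting point for the target task,
#    attaching a fresh output layer that is trained on the small dataset.
transfer = keras.Sequential(base.layers[:-1] + [keras.layers.Dense(1)])
transfer.compile(optimizer="adam", loss="mse")
transfer.fit(X_small, y_small, epochs=5, verbose=0)
```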

2) Using a Pre-Trained Model

The second approach is to use a pre-trained model. According to earlier studies and projects built for data prediction, different models may follow different data distributions, so the choice of pre-trained model matters. Using a well-matched pre-trained model produces a more robust structure for prediction and reduces training cost, because most of the representation has already been learned.
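A common way to put this into practice is sketched below, assuming an image model such as VGG16 from keras.applications (an illustrative choice, not necessarily what our project will use): load the network pre-trained on ImageNet, freeze its weights, and train only a new head for the new task.

```python
from tensorflow import keras

# Load a model pre-trained on ImageNet, without its original classification head.
base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # keep the pre-trained weights fixed

# Add a new head for our own task (here a single regression output, e.g. a price).
inputs = keras.Input(shape=(224, 224, 3))
x = base(inputs, training=False)
outputs = keras.layers.Dense(1)(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="mse")
# model.fit(train_images, train_prices, epochs=5)  # only the new head is trained
```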

3) Feature Extraction

source: https://towardsdatascience.com/transfer-learning-946518f95666

Another approach is to use deep learning to discover the best representation for the task, that is, to find the most important features. This approach is also known as representation learning, and it can often achieve much better performance than a hand-designed representation.

In most machine learning techniques, features are crafted manually by researchers and domain experts. Deep learning, in contrast, can extract features automatically: a neural network learns which of the inputs you feed it are really important and which are not. A representation learning algorithm can discover a good combination of features in a short amount of time, even for complex tasks that would otherwise require a lot of human effort.
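As a minimal sketch of feature extraction (the extractor, the random placeholder images, and the Ridge regressor are illustrative assumptions), the data is pushed through a frozen pre-trained network once, and a simple regressor is then trained on the resulting feature vectors instead of on the raw inputs.

```python
import numpy as np
from tensorflow import keras
from sklearn.linear_model import Ridge

# A frozen pre-trained network used purely as a fixed feature extractor.
extractor = keras.applications.VGG16(weights="imagenet", include_top=False,
                                     input_shape=(224, 224, 3), pooling="avg")

# Hypothetical property photos and their prices (random placeholders here;
# real photos should first go through keras.applications.vgg16.preprocess_input).
images = np.random.rand(100, 224, 224, 3).astype("float32")
prices = np.random.rand(100)

# One fixed-length feature vector per image, then a simple regressor on top.
features = extractor.predict(images, verbose=0)
regressor = Ridge().fit(features, prices)
predicted_prices = regressor.predict(features)
```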

Summary

In conclusion, Transfer Learning can increase the accuracy of boosting models and neural networks, while reducing both the training time and the amount of data required.
