Published in DataSeries

Preventing overfitting: Regularization

The ultimate goal of any Machine Learning model is to make reliable predictions on new, unseen data. Hence, while training our algorithm, we always have to keep in mind that a good score on the training set doesn't necessarily mean our model will generalize well to new data. Indeed, whenever we have a model which fits our training data perfectly, we are probably running into overfitting: our model has too many parameters, which cannot be justified by the data, and it ends up far too complex.
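To make this concrete, here is a minimal sketch (not from the original article, and the dataset, polynomial degree and alpha value are illustrative assumptions) using scikit-learn: it fits a deliberately over-parameterized polynomial regression on a small synthetic dataset, and then fits the same feature space with an L2 (Ridge) penalty. Typically the unregularized fit chases the noise, scoring higher on the training set but lower on held-out data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

# Illustrative toy data: a noisy quadratic relationship
rng = np.random.RandomState(0)
X = np.sort(rng.uniform(-3, 3, size=(40, 1)), axis=0)
y = 0.5 * X.ravel() ** 2 + rng.normal(scale=1.0, size=40)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Over-parameterized model: high-degree polynomial, no penalty on the coefficients
overfit = make_pipeline(
    PolynomialFeatures(degree=15, include_bias=False), StandardScaler(), LinearRegression()
)

# Same feature space, but with an L2 (Ridge) penalty shrinking the coefficients
regularized = make_pipeline(
    PolynomialFeatures(degree=15, include_bias=False), StandardScaler(), Ridge(alpha=1.0)
)

for name, model in [("Unregularized", overfit), ("Ridge (L2)", regularized)]:
    model.fit(X_train, y_train)
    print(f"{name:>13} | train R2: {r2_score(y_train, model.predict(X_train)):.3f}"
          f" | test R2: {r2_score(y_test, model.predict(X_test)):.3f}")
```

The alpha parameter controls the strength of the penalty: larger values shrink the coefficients more aggressively, trading a little training accuracy for better generalization.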

Valentina Alto
Cloud Specialist at @Microsoft | MSc in Data Science | Machine Learning, Statistics and Running enthusiast
