Guide to L1 and L2 regularization in Deep Learning

Uniqtech
Data Science Bootcamp

--

Alternative title: Understand regularization in minutes for effective deep learning. All about regularization in Deep Learning and AI.

Regularization prevents model overfitting by restricting parameter freedom. This is a beginner friendly deep dive into the regularization formulas. We write beginner friendly tutorials: Softmax, Natural Language Processing, Cross Entropy Loss, and GPT-3 model strengths and weaknesses.

Read the full disclaimer; in short, our tutorials are for educational purposes only. We are NOT responsible for any commercial or production use, nor do we advise it. All articles are exclusively published on our Medium and subdomains. No reposting, no scraping. Thanks.

We will discuss regularization methods used to restrict weights. By restricting the growth of weights, the model becomes easier to compute and better at generalizing. (Regularization puts pressure on the weights, preventing them from growing out of control.) By generalizing well, we mean the model should perform well on training data as well as on new, unseen inputs. That is practically the holy grail of machine learning: a senior technical manager we interviewed stated that helping algorithms generalize well is a key goal of his work.
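As a minimal sketch (our notation, not from the article), "putting pressure on the weights" means adding a penalty term R(w), scaled by a strength hyperparameter λ, to the ordinary data loss:

```latex
% Standard regularized objective: total loss = data loss + weighted penalty on the weights.
% \mathcal{L} is the data loss (e.g. cross entropy), \mathbf{w} the weight vector,
% \lambda the regularization strength, R the penalty (L1 or L2 below).
J(\mathbf{w}) = \mathcal{L}(\mathbf{w};\, X, y) + \lambda\, R(\mathbf{w})
```

The larger λ is, the harder the optimizer is pushed toward small weights; λ = 0 recovers the plain, unregularized loss.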

There are quite a few regularization methods; we only cover the two popular ones, L1 and L2, sketched below.
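Here is a hedged NumPy sketch (ours, not code from the article) of the two penalties; `lam` stands for the strength λ, `weights` for a model's flattened parameter vector, and `data_loss` is a mock placeholder for whatever loss the model actually uses:

```python
import numpy as np

def l1_penalty(weights, lam):
    # L1 penalty: lam * sum(|w|). Encourages sparsity (drives some weights to exactly zero).
    return lam * np.sum(np.abs(weights))

def l2_penalty(weights, lam):
    # L2 penalty: lam * sum(w^2). Shrinks all weights smoothly toward zero ("weight decay").
    return lam * np.sum(np.square(weights))

# Toy usage: add either penalty to a (mock) data loss before optimizing.
weights = np.array([0.5, -1.2, 3.0, 0.0])
data_loss = 0.42  # stand-in for cross entropy, MSE, etc.

print("loss + L1:", data_loss + l1_penalty(weights, lam=0.01))
print("loss + L2:", data_loss + l2_penalty(weights, lam=0.01))
```

Note the design difference: the absolute value in L1 produces sparse solutions, while the square in L2 penalizes large weights much more heavily than small ones.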
