How to Apply L1 and L2 Regularization Techniques to Keras Models

Neural Networks and Deep Learning Course: Part 20

Rukshan Pramoditha
Data Science 365


Image by Kranich17 from Pixabay

Prerequisite: Regularization Methods for Neural Networks — Introduction

There are several types of regularization techniques for neural networks. Today, we’ll discuss L1 and L2 regularization techniques and their Keras implementation.

Overview of regularization techniques for neural networks (Image by author, made with draw.io)

Both L1 and L2 regularization techniques fall under the category of weight/parameter regularization. This type of regularization keeps the weights of the neural network small (near zero) by adding a penalizing term to the loss function.
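To make the penalty terms concrete, here is a minimal NumPy sketch (the weight values and regularization factor are illustrative assumptions, not taken from the article): L1 adds the sum of absolute weight values to the loss, while L2 adds the sum of squared weight values.

```python
import numpy as np

def l1_penalty(weights, lam):
    # L1 (lasso) penalty: lam * sum(|w|), pushes weights toward exactly zero
    return lam * np.sum(np.abs(weights))

def l2_penalty(weights, lam):
    # L2 (ridge) penalty: lam * sum(w^2), shrinks weights toward zero
    return lam * np.sum(np.square(weights))

# Illustrative weight vector and regularization factor
w = np.array([0.5, -1.0, 2.0])
print(l1_penalty(w, 0.01))  # 0.01 * 3.5  = 0.035
print(l2_penalty(w, 0.01))  # 0.01 * 5.25 = 0.0525
```

Either penalty is added to the original loss, so the optimizer is discouraged from growing the weights unless doing so meaningfully reduces the training error.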

How L1 and L2 regularization work to mitigate overfitting in neural networks

As we already discussed in Part 1, the weights in a neural network determine how much influence each input feature has on the final output. Large weights place too much emphasis on inputs that matter during training but carry little signal on new, unseen data. The result is high variance and an overfitted model.
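In Keras, these penalties are attached per layer through the `kernel_regularizer` argument using `tensorflow.keras.regularizers.l1`, `l2`, or `l1_l2`. A minimal sketch follows; the layer sizes, input shape, and regularization factors (0.01) are illustrative assumptions:

```python
from tensorflow.keras import layers, models, regularizers

# Illustrative model: assumed 20 input features, binary output
model = models.Sequential([
    # L2 (weight decay) penalty on this layer's weights
    layers.Dense(64, activation="relu", input_shape=(20,),
                 kernel_regularizer=regularizers.l2(0.01)),
    # L1 penalty on this layer's weights
    layers.Dense(64, activation="relu",
                 kernel_regularizer=regularizers.l1(0.01)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
```

Keras adds each layer's penalty to the training loss automatically; `regularizers.l1_l2(l1=0.01, l2=0.01)` applies both penalties at once (elastic net).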
