Activation Functions for Deep Learning and Machine Learning
Updated July 2023. This is your activation function cheatsheet for deep learning. Practical usage of activation functions is not hard: you just have to remember a few important details, rules of thumb, and tricks. An activation function is the main way to introduce non-linearity into a machine learning model, which would otherwise be a stack of linear transformations and therefore unable to capture complex non-linear patterns in data.
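To make that concrete, here is a minimal NumPy sketch (my illustration, not from the article) showing that two linear layers with no activation in between collapse into a single linear layer:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two "layers" with no activation in between: y = W2 @ (W1 @ x)
W1 = rng.normal(size=(4, 3))
W2 = rng.normal(size=(2, 4))
x = rng.normal(size=3)

two_layers = W2 @ (W1 @ x)
one_layer = (W2 @ W1) @ x  # a single matrix does the same job

print(np.allclose(two_layers, one_layer))  # True: without activations the model stays linear
```

With that in mind, here are the questions to keep in mind for any activation function: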
- What is the input?
- What is the output?
- What is the range of the potential output? Is it [-1, 1] or [0, 1], for example?
- What does the activation function look like if it is graphed?
- What does the activation function derivative function look like if it is graphed? (It can hint at a vanishing gradient problem.)
- The trick is to graph the activation function if it is hard to understand (see the sketch right after this list).
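As an illustration of that graphing trick, here is a short sketch, assuming NumPy and Matplotlib are installed, that plots the sigmoid and its derivative. The derivative never exceeds 0.25 and flattens toward zero in the tails, which is exactly the vanishing-gradient hint mentioned above:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-8, 8, 400)
sig = 1 / (1 + np.exp(-x))   # sigmoid: output range (0, 1)
dsig = sig * (1 - sig)       # derivative: peaks at 0.25, vanishes in the tails

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 3))
ax1.plot(x, sig)
ax1.set_title("sigmoid(x)")
ax2.plot(x, dsig)
ax2.set_title("sigmoid'(x)")
plt.show()
```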
Sigmoid, ReLU, and Softmax are the three most famous activation functions used in deep learning and machine learning. They are actually very easy to understand. This article introduces the intuition behind using each of the three.
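For reference, here is a minimal sketch of all three in plain NumPy. The formulas are standard; the code itself is my own illustration, not from the article:

```python
import numpy as np

def sigmoid(x):
    # Squashes any real input into (0, 1); common for binary probabilities.
    return 1 / (1 + np.exp(-x))

def relu(x):
    # Zero for negative inputs, identity for positive; output range [0, inf).
    return np.maximum(0, x)

def softmax(z):
    # Turns a vector of scores into probabilities that sum to 1.
    # Subtracting the max is a standard numerical-stability trick.
    e = np.exp(z - np.max(z))
    return e / e.sum()

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # all values in (0, 1)
print(relu(np.array([-2.0, 0.0, 2.0])))     # [0. 0. 2.]
print(softmax(np.array([1.0, 2.0, 3.0])))   # components sum to 1.0
```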
In general, the activation function is what lends non-linearity to our model: you can see that none of the activation function graphs is linear. As you will see in this article, activation functions are really easy for beginners to understand. It can…