A Comprehensive Guide to Activation Functions in Deep Learning

Aarafat Islam
𝐀𝐈 𝐦𝐨𝐧𝐤𝐬.𝐢𝐨
10 min read · Sep 25, 2023
Header image created with leonardo.ai

“Activation functions are the spark of intelligence in neural networks.”

Activation functions are the lifeblood of neural networks, bestowing them with the remarkable power to model complex relationships within data. They serve as the nonlinear element in neural networks, elevating them beyond linear models and enabling them to tackle intricate patterns. In this comprehensive guide, we will delve deeply into the world of activation functions, unraveling their definitions, exploring their significance, examining a diverse range of types, providing Python implementations, and offering valuable insights into selecting the optimal activation function for your neural network.

What Are Activation Functions?

Activation functions are mathematical operations applied to the outputs of individual neurons in a neural network. These functions introduce nonlinearity, allowing the network to capture intricate patterns and make nonlinear transformations from input to output. Without activation functions, a neural network would be limited to linear mappings, rendering it incapable of representing and learning complex relationships in data.

Why Do We Need Activation Functions?

The necessity for activation functions arises from the inherently nonlinear nature of real-world data. In essence, they provide neural networks with the ability to model complex relationships, uncover hierarchical features, and generalize to unseen data. Moreover, activation functions play a crucial role in gradient-based optimization during training, which is paramount for updating network weights and minimizing loss.

  • Handling Nonlinearity: Activation functions are essential because real-world data often exhibits nonlinear patterns and relationships. Without activation functions, neural networks would be limited to linear transformations, rendering them incapable of capturing these intricate nonlinearities (the short sketch after this list makes this concrete).
  • Modeling Complex Relationships: Activation functions empower neural networks to model complex and intricate relationships within data. They allow networks to learn and represent nonlinear mappings from input to output, enabling them to tackle a wide range of tasks, from image recognition to natural language processing.
  • Uncovering Hierarchical Features: Neural networks comprise multiple layers of interconnected neurons. Activation functions facilitate the discovery of hierarchical features in data. As information passes through each layer, these functions introduce nonlinear transformations that enable the network to recognize progressively abstract and complex patterns.
  • Generalization to Unseen Data: Activation functions contribute to the network’s ability to generalize. By capturing the underlying nonlinearities in the training data, neural networks can make accurate predictions on new, unseen data points. This generalization is essential for the network to perform well in real-world scenarios.
  • Gradient-Based Optimization: During training, neural networks adjust their weights and biases using gradient-based optimization techniques, such as backpropagation. Activation functions play a pivotal role in this process by providing gradients (derivatives) that indicate how much each neuron’s output contributes to the overall error. Without these gradients, optimizing the network’s parameters would be nearly impossible.
  • Avoiding Vanishing and Exploding Gradients: Saturating activation functions such as sigmoid and tanh can cause gradients to shrink toward zero during backpropagation (the vanishing gradient problem), while poorly scaled networks can also suffer from exploding gradients. Activation functions like ReLU, which do not saturate for positive inputs, help keep gradients healthy. Properly chosen activation functions contribute to stable and efficient training.
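To make the first point concrete, here is a minimal NumPy sketch (with made-up layer shapes) showing that two stacked linear layers without an activation are equivalent to a single linear layer, while inserting a ReLU breaks that equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))                       # a small batch of inputs
W1, W2 = rng.normal(size=(3, 5)), rng.normal(size=(5, 2))

# Two "layers" with no activation collapse into one linear map.
two_linear_layers = (x @ W1) @ W2
single_linear_layer = x @ (W1 @ W2)
print(np.allclose(two_linear_layers, single_linear_layer))   # True

# Inserting a nonlinearity (ReLU here) breaks that equivalence.
with_activation = np.maximum(0, x @ W1) @ W2
print(np.allclose(with_activation, single_linear_layer))     # False (in general)
```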

Types of Activation Functions

The universe of activation functions is rich and diverse, each type possessing unique characteristics and applications. Let’s explore a myriad of these activation functions:

1. Sigmoid Activation

The sigmoid activation function, defined as f(x) = 1 / (1 + exp(-x)), compresses input values into a range between 0 and 1. This function is often employed in the output layer of binary classification problems, as it yields outputs that resemble probabilities.
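A minimal NumPy implementation of the formula above might look like this (a simple sketch; production code would typically use a numerically stable variant):

```python
import numpy as np

def sigmoid(x):
    """Squash inputs into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-x))

print(sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[0.119 0.5   0.881]
```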

Sigmoid

2. Hyperbolic Tangent (Tanh) Activation

Tanh activation, characterized by f(x) = (exp(x) - exp(-x)) / (exp(x) + exp(-x)), maps input values to the range (-1, 1). It is frequently used within hidden layers of neural networks; its zero-centered output and larger gradients near zero often make optimization easier than with sigmoid, although it still saturates for large inputs.
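A straightforward NumPy sketch of this definition (np.tanh would be the usual choice in practice):

```python
import numpy as np

def tanh(x):
    """Map inputs into the range (-1, 1); equivalent to np.tanh(x)."""
    return (np.exp(x) - np.exp(-x)) / (np.exp(x) + np.exp(-x))

print(tanh(np.array([-1.0, 0.0, 1.0])))  # ~[-0.762  0.     0.762]
```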

Hyperbolic Tangent (Tanh)

3. Rectified Linear Unit (ReLU) Activation

ReLU, defined as f(x) = max(0, x), stands as one of the most popular activation functions. It introduces sparsity by setting negative values to zero, making it computationally efficient and well-suited for deep networks. However, it is not without its shortcomings, such as the dying ReLU problem.
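A one-line NumPy sketch of ReLU:

```python
import numpy as np

def relu(x):
    """Zero out negative inputs, pass positive inputs through unchanged."""
    return np.maximum(0.0, x)

print(relu(np.array([-3.0, 0.0, 3.0])))  # [0. 0. 3.]
```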

Rectified Linear Unit (ReLU)

4. Leaky ReLU Activation

Leaky ReLU, an enhancement over standard ReLU, allows a small gradient for negative values. It is defined as f(x) = max(alpha * x, x), where alpha is a small positive constant. Leaky ReLU is a useful choice when confronted with the dying ReLU problem.
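A small NumPy sketch, using an illustrative default of alpha = 0.01:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    """Pass positives through; scale negatives by a small slope alpha."""
    return np.where(x > 0, x, alpha * x)

print(leaky_relu(np.array([-2.0, 2.0])))  # [-0.02  2.  ]
```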

Leaky ReLU

5. Exponential Linear Unit (ELU) Activation

ELU, characterized by f(x) = x if x > 0 else alpha * (exp(x) - 1), combines the strengths of ReLU and Leaky ReLU while mitigating the dying ReLU problem. It particularly shines when the network needs to capture both positive and negative values.
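A minimal NumPy sketch with the common choice alpha = 1.0:

```python
import numpy as np

def elu(x, alpha=1.0):
    """Identity for x > 0; smooth saturation toward -alpha for x <= 0."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(elu(np.array([-2.0, 2.0])))  # ~[-0.865  2.   ]
```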

Exponential Linear Unit (ELU)

6. Swish Activation

Swish, proposed by Google’s research team, is defined as f(x) = x * sigmoid(x). It combines the advantages of ReLU's computational efficiency with a smoother, non-monotonic behavior, potentially leading to improved performance.
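A minimal NumPy sketch (note that x * sigmoid(x) simplifies to x / (1 + exp(-x))):

```python
import numpy as np

def swish(x):
    """x * sigmoid(x): a smooth, non-monotonic relative of ReLU."""
    return x / (1.0 + np.exp(-x))

print(swish(np.array([-2.0, 0.0, 2.0])))  # ~[-0.238  0.     1.762]
```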

Swish

7. Parametric ReLU (PReLU) Activation

Parametric ReLU, an extension of Leaky ReLU, allows the learning of the alpha parameter instead of setting it manually. It is expressed as f(x) = max(alpha * x, x), with alpha being a learnable parameter.
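In a real framework alpha would be a trainable tensor updated by backpropagation; the NumPy sketch below simply treats it as a passed-in value:

```python
import numpy as np

def prelu(x, alpha):
    """Like Leaky ReLU, but alpha is a learnable parameter rather than a fixed constant."""
    return np.where(x > 0, x, alpha * x)

# alpha would normally be learned; 0.25 here is just an illustrative value.
print(prelu(np.array([-2.0, 2.0]), alpha=0.25))  # [-0.5  2. ]
```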

Parametric ReLU (PReLU)

8. Randomized Leaky ReLU (RReLU) Activation

Randomized Leaky ReLU (RReLU) is a variation of Leaky ReLU where the alpha parameter is randomly sampled from a uniform distribution during training and fixed (typically to the mean of that distribution) at inference time. This randomness can act as a regularization technique.
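A NumPy sketch of this behavior; the sampling bounds below are illustrative defaults, and the mean slope is used when not training:

```python
import numpy as np

def rrelu(x, lower=1/8, upper=1/3, training=True, rng=np.random.default_rng(0)):
    """Leaky ReLU whose negative slope is sampled uniformly during training."""
    if training:
        alpha = rng.uniform(lower, upper, size=np.shape(x))
    else:
        alpha = (lower + upper) / 2.0   # fixed expected slope at inference time
    return np.where(x > 0, x, alpha * x)

print(rrelu(np.array([-2.0, 2.0])))
```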

Randomized Leaky ReLU (RReLU)

9. Parametric Exponential Linear Unit (PELU) Activation

Parametric Exponential Linear Unit (PELU) extends ELU by allowing the learning of the alpha parameter. It is defined as f(x) = x if x > 0 else alpha * (exp(x) - 1) with alpha being a learnable parameter.
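A sketch following the simplified definition above; as with PReLU, alpha would be learned during training rather than passed in by hand:

```python
import numpy as np

def pelu(x, alpha):
    """ELU-shaped curve whose alpha is treated as a learnable parameter."""
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(pelu(np.array([-2.0, 2.0]), alpha=1.5))  # ~[-1.297  2.   ]
```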

Parametric Exponential Linear Unit (PELU)

10. Softmax Activation

The softmax activation function is primarily used in the output layer of multi-class classification problems. It converts a vector of real numbers into a probability distribution over multiple classes.
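A minimal NumPy sketch for a single vector of scores (subtracting the maximum is a standard trick for numerical stability):

```python
import numpy as np

def softmax(z):
    """Convert a vector of real-valued scores into a probability distribution."""
    z = z - np.max(z)          # improves numerical stability without changing the result
    exp_z = np.exp(z)
    return exp_z / np.sum(exp_z)

probs = softmax(np.array([2.0, 1.0, 0.1]))
print(probs, probs.sum())      # probabilities that sum to 1.0
```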

Softmax

11. Softplus Activation

The softplus activation function, defined as f(x) = ln(1 + exp(x)), is a smooth approximation of ReLU. It introduces nonlinearity while ensuring smooth derivatives.
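A minimal NumPy sketch using log1p for the logarithm:

```python
import numpy as np

def softplus(x):
    """Smooth approximation of ReLU: ln(1 + exp(x))."""
    return np.log1p(np.exp(x))

print(softplus(np.array([-2.0, 0.0, 2.0])))  # ~[0.127 0.693 2.127]
```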

Softplus

12. ArcTan Activation

The arctan activation function, based on the inverse tangent, is defined as f(x) = atan(x). It squashes input values into the range (-π/2, π/2) and can be useful when a bounded, zero-centered output with gentle saturation is desired.
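A trivial NumPy sketch, wrapping np.arctan:

```python
import numpy as np

def arctan(x):
    """Squash inputs into the range (-pi/2, pi/2)."""
    return np.arctan(x)

print(arctan(np.array([-10.0, 0.0, 10.0])))  # ~[-1.471  0.     1.471]
```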

ArcTan

13. Gaussian Error Linear Unit (GELU) Activation

The GELU activation function, popularized by its use in Transformers, is defined as GELU(x) = x * Φ(x), where Φ is the standard Gaussian CDF, making it a smooth, probabilistically motivated relative of the rectifier. A widely used tanh approximation is GELU(x) ≈ 0.5 * x * (1 + tanh(sqrt(2/π) * (x + 0.044715 * x³))).
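A NumPy sketch of this tanh approximation:

```python
import numpy as np

def gelu(x):
    """Tanh approximation of GELU, as used in many Transformer implementations."""
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

print(gelu(np.array([-1.0, 0.0, 1.0])))  # ~[-0.159  0.     0.841]
```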

Gaussian Error Linear Unit (GELU)

14. Swish-1 Activation

Swish-1 is the variant of Swish with the scaling parameter β fixed to 1, i.e. f(x) = x * sigmoid(x) = x / (1 + exp(-x)). In its general form, Swish is f(x) = x * sigmoid(βx), where β can be a constant or a learnable parameter, giving a family of related activation profiles.
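A NumPy sketch of the general form; beta = 1 recovers Swish-1:

```python
import numpy as np

def swish_beta(x, beta=1.0):
    """General Swish: x * sigmoid(beta * x); beta = 1 gives Swish-1."""
    return x / (1.0 + np.exp(-beta * x))

print(swish_beta(np.array([-2.0, 2.0])))            # Swish-1
print(swish_beta(np.array([-2.0, 2.0]), beta=2.0))  # sharper gating
```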

Swish-1

15. Inverse Square Root Linear Unit (ISRLU) Activation

ISRLU, or Inverse Square Root Linear Unit, keeps the identity for non-negative inputs and applies f(x) = x / sqrt(1 + αx²) for negative inputs, where α controls how quickly the negative branch saturates. It is another smooth, ELU-like alternative to ReLU.
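A NumPy sketch of the piecewise definition, with an illustrative alpha = 1.0:

```python
import numpy as np

def isrlu(x, alpha=1.0):
    """Identity for x >= 0; x / sqrt(1 + alpha * x**2) for x < 0."""
    return np.where(x >= 0, x, x / np.sqrt(1.0 + alpha * x**2))

print(isrlu(np.array([-2.0, 2.0])))  # ~[-0.894  2.   ]
```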

Inverse Square Root Linear Unit (ISRLU)

16. Scaled Exponential Linear Unit (SELU) Activation

SELU is an activation function designed so that, in suitably initialized networks, activations automatically normalize toward zero mean and unit variance. It is the piecewise function SELU(x) = λ * x for x > 0 and SELU(x) = λ * α * (exp(x) - 1) for x ≤ 0, with fixed constants λ ≈ 1.0507 and α ≈ 1.6733.
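A NumPy sketch using the fixed constants quoted above:

```python
import numpy as np

def selu(x, lam=1.0507, alpha=1.6733):
    """Scaled ELU with the fixed self-normalizing constants."""
    return lam * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

print(selu(np.array([-2.0, 2.0])))  # ~[-1.52   2.101]
```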

Scaled Exponential Linear Unit (SELU)

17. SoftExponential Activation

The soft exponential activation function is defined here as f(x) = ln(1 + alpha * (exp(x) - 1)) / alpha, introducing nonlinearity through a learnable parameter alpha.
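A NumPy sketch that follows the formula quoted above; alpha is passed in as a plain value here, though in practice it would be learned:

```python
import numpy as np

def soft_exponential(x, alpha=0.5):
    """Simplified soft exponential: ln(1 + alpha * (exp(x) - 1)) / alpha."""
    return np.log(1.0 + alpha * (np.exp(x) - 1.0)) / alpha

print(soft_exponential(np.array([-1.0, 0.0, 1.0])))
```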

SoftExponential

18. Bipolar Sigmoid Activation

The bipolar sigmoid activation is a variation of the sigmoid function, defined as f(x) = (1 - exp(-x)) / (1 + exp(-x)), that maps input values to the range between -1 and 1.
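A minimal NumPy sketch (note this equals tanh(x / 2)):

```python
import numpy as np

def bipolar_sigmoid(x):
    """Scaled and shifted sigmoid mapping inputs into (-1, 1); equals tanh(x / 2)."""
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

print(bipolar_sigmoid(np.array([-2.0, 0.0, 2.0])))  # ~[-0.762  0.     0.762]
```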

Bipolar Sigmoid

19. Binary Step Activation

Binary step activation is the simplest activation function: it outputs 1 when the input reaches a threshold and 0 otherwise. Because its gradient is zero almost everywhere, it is rarely used in networks trained with backpropagation.
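A minimal NumPy sketch with a configurable threshold (0 by default):

```python
import numpy as np

def binary_step(x, threshold=0.0):
    """Output 1 when the input reaches the threshold, 0 otherwise."""
    return np.where(x >= threshold, 1.0, 0.0)

print(binary_step(np.array([-0.5, 0.0, 0.5])))  # [0. 1. 1.]
```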

Binary Step

Visualizing Activation Functions

Activation function characteristics visualization
All the functions in a single graph
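To reproduce a combined plot like the one described above, a short matplotlib sketch over an assumed selection of the functions defined earlier could look like this:

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
activations = {
    "sigmoid": 1 / (1 + np.exp(-x)),
    "tanh": np.tanh(x),
    "ReLU": np.maximum(0, x),
    "leaky ReLU": np.where(x > 0, x, 0.01 * x),
    "ELU": np.where(x > 0, x, np.exp(x) - 1),
    "swish": x / (1 + np.exp(-x)),
    "softplus": np.log1p(np.exp(x)),
    "GELU": 0.5 * x * (1 + np.tanh(np.sqrt(2 / np.pi) * (x + 0.044715 * x**3))),
}

for name, y in activations.items():
    plt.plot(x, y, label=name)          # overlay every curve on one set of axes
plt.axhline(0, color="gray", lw=0.5)
plt.axvline(0, color="gray", lw=0.5)
plt.legend()
plt.title("Common activation functions")
plt.show()
```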

Choosing the Right Activation Function

Selecting the appropriate activation function is a critical decision in the design of neural networks. The choice should be based on several factors, including the nature of your task, the architecture of your network, and the characteristics of your data. Below, we provide a comprehensive overview of different activation functions and their recommended use cases:

1. Sigmoid: Sigmoid activation is well-suited for binary classification problems where you need outputs that resemble probabilities. It squashes input values into the range between 0 and 1, making it ideal for problems with two distinct classes.

2. Tanh (Hyperbolic Tangent): Tanh is an excellent choice for hidden layers, especially when your input data is centered around zero (mean-zero data). It maps input values to the range (-1, 1); its zero-centered output helps gradients flow more evenly than sigmoid, and it is often preferred in recurrent neural networks (RNNs), though it can still saturate for large inputs.

3. ReLU (Rectified Linear Unit): ReLU is a widely used activation function and serves as a good default choice for most situations. It introduces sparsity by setting negative values to zero, making it computationally efficient. However, it may lead to dead neurons during training, so it’s crucial to monitor its performance.

4. Leaky ReLU: Leaky ReLU is a variant of ReLU and is employed when the standard ReLU causes neurons to become inactive. It allows a small gradient for negative values, preventing the issue of dead neurons. It’s a recommended alternative to standard ReLU.

5. ELU (Exponential Linear Unit): ELU is valuable when you want the network to capture both positive and negative values within the hidden layers. It addresses the dying ReLU problem and can lead to faster convergence during training.

6. Swish: Swish is an activation function worth experimenting with, as it combines the computational efficiency of ReLU with a smoother, non-monotonic behavior. It has shown potential performance improvements in some architectures.

7. PReLU (Parametric ReLU): PReLU extends Leaky ReLU by allowing each neuron to learn its optimal alpha parameter. This can be beneficial when you want the network to adapt its activation function during training.

8. RReLU (Randomized Leaky ReLU): RReLU introduces randomness as a form of regularization during training. It can help prevent overfitting and enhance the generalization ability of the network.

9. PELU (Parametric Exponential Linear Unit): PELU extends ELU by enabling neurons to learn their alpha parameter. This flexibility can be advantageous in various scenarios, allowing the network to adapt to the data.

10. Softmax: Softmax activation is essential for multi-class classification problems in the output layer. It transforms a vector of real numbers into a probability distribution over multiple classes, enabling the network to make class predictions.

11. Softplus: Softplus is a smooth approximation of ReLU and can be helpful when you need a smooth activation function with continuous and differentiable derivatives.

12. ArcTan: ArcTan squashes input values to a limited range between -π/2 and π/2. It can be suitable for specific applications where you need to restrict the output within this range.

13. GELU (Gaussian Error Linear Unit): GELU is popular in transformer models. It weights each input by the Gaussian CDF, producing a smooth, ReLU-like curve that often improves model performance in practice.

14. Swish-1: Swish-1 is Swish with the β parameter fixed to 1. The general form with a tunable or learnable β offers a different activation profile and is worth considering in experimentation.

15. ISRLU (Inverse Square Root Linear Unit): ISRLU is a smooth alternative to ReLU that can be helpful when you want to maintain smooth gradients throughout the network.

16. SELU (Scaled Exponential Linear Unit): SELU encourages automatic activation normalization and can lead to better training performance, especially in deep neural networks.

17. SoftExponential: SoftExponential introduces nonlinearity with a learnable parameter, allowing the network to adapt to specific data distributions.

18. Bipolar Sigmoid: Bipolar Sigmoid maps inputs to the range between -1 and 1, which can be beneficial when you want to model data with positive and negative values.

19. Binary Step: Binary Step is the simplest activation function, providing binary outputs based on a specified threshold. It’s suitable for binary decision problems.

In summary, the choice of an activation function should be guided by the specific requirements of the neural network, the problem being addressed, and the characteristics of the data. Experimentation and fine-tuning are often necessary to determine the most effective activation function for the given task and architecture.
