Neural Network Activation Function Types

Understanding what really happens in a neural network

Farhad Malik · Published in FinTechExplained · 6 min read · May 18, 2019

This article aims to explain how activation functions work in a neural network.

If you want to understand the basics of a neural network, then please read:

What Is An Activation Function?

An activation function is simply a mathematical function that takes in an input and produces an output. The function is activated when the computed result reaches a specified threshold.

The input in this instance is the weighted sum of the inputs plus a bias:

z = (x₁·w₁ + x₂·w₂ + … + xₙ·wₙ) + b

Understanding The Formula

For example, take a set of inputs x₁, x₂, …, xₙ and their corresponding weights w₁, w₂, …, wₙ.

First, the weighted sum is computed by multiplying each input by its weight and summing the results.

Subsequently, a bias (a constant) is added to the weighted sum.

Finally, the computed value is fed into the activation function, which then produces the output. The sketch below walks through these steps with illustrative numbers.
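To make these steps concrete, here is a minimal sketch in Python. The input values, weights, bias and the choice of sigmoid are illustrative assumptions, not values taken from the article.

```python
import numpy as np

# Illustrative values (assumptions): three inputs and their weights.
inputs = np.array([0.5, 0.2, 0.1])
weights = np.array([0.4, 0.7, 0.2])
bias = 0.3

# Step 1: compute the weighted sum of the inputs.
weighted_sum = np.dot(inputs, weights)   # 0.5*0.4 + 0.2*0.7 + 0.1*0.2 = 0.36

# Step 2: add the bias (a constant).
z = weighted_sum + bias                  # 0.36 + 0.3 = 0.66

# Step 3: feed the result into an activation function (sigmoid, as one common choice).
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

output = sigmoid(z)
print(output)                            # roughly 0.659
```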

Think of the activation function as a mathematical operation that normalises the input and produces an output. The output is then passed forward to the neurons in the subsequent layer.
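The hand-off between layers can be sketched as a small forward pass. The layer sizes and random weights below are assumptions chosen purely for illustration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(seed=0)

# Assumed layer sizes for illustration: 3 inputs -> 4 hidden neurons -> 2 output neurons.
x = np.array([0.5, 0.2, 0.1])
W1, b1 = rng.normal(size=(4, 3)), rng.normal(size=4)
W2, b2 = rng.normal(size=(2, 4)), rng.normal(size=2)

# Each layer applies the same recipe: weighted sum plus bias, then the activation.
hidden = sigmoid(W1 @ x + b1)        # outputs of the first layer...
output = sigmoid(W2 @ hidden + b2)   # ...are passed forward as inputs to the next layer
print(output)
```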

What Are Activation Function Thresholds?

The thresholds are pre-defined numerical values in the function. This behaviour adds non-linearity to the output, and it is precisely this non-linearity that allows a neural network to solve non-linear problems: problems where there is no direct linear relationship between the input and the output.

To handle these complex scenarios, a number of activation functions have been introduced, each of which can be applied to the inputs; a few common choices are sketched below.
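As a rough illustration of how a few common activation functions behave on the same pre-activation values (the threshold value and the particular functions shown here are assumed, common choices, not a list prescribed by the article):

```python
import numpy as np

z = np.linspace(-2.0, 2.0, 5)   # example pre-activation values

# A hard threshold: the neuron fires (outputs 1) only once the input reaches the threshold.
def binary_step(x, threshold=0.0):
    return np.where(x >= threshold, 1.0, 0.0)

# Smooth non-linear alternatives that are often used instead of a hard threshold.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

for fn in (binary_step, sigmoid, np.tanh, relu):
    print(fn.__name__, fn(z))
```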


Activation Function Types

