How using adaptive methods can help your network perform better

Paula Errasti
Bedrock — Human Intelligence
Jul 23, 2021


An Artificial Neural Network (ANN) is a statistical learning algorithm, framed in the context of supervised learning and Artificial Intelligence. It is composed of a group of highly connected nodes called neurons, organised into an input layer and an output layer. In addition, there may be several hidden layers between these two, a situation known as deep learning.

Algorithms like ANNs are everywhere in modern life, helping to optimise lots of different processes and make good business decisions. If you want to read a more detailed introduction to Neural Network algorithms, check out our previous article, but if you’re feeling brave enough to get your hands dirty with mathematical details about ways to optimise them, you’re in the right place!

Optimisation techniques: Adaptive methods

When we train an artificial neural network, what we are basically doing is solving an optimisation problem. A well optimised machine learning algorithm is a powerful tool: it can achieve better accuracy while also saving time and resources. But if we neglect the optimisation process, the consequences can be very negative. For instance, the algorithm might seem perfect during testing but fail resoundingly in the real world, or we might carry incorrect underlying assumptions about our data and amplify them when we implement the model. For this reason, it is extremely important to spend time and effort optimising a machine learning algorithm and, especially, a neural network.

The objective function that we want to optimise (in particular, minimise) is in this case the cost or loss function J, which depends on the weights ω of the network. The value of this function tells us how well our network performs, in other words, how well it solves the regression or classification problem we are dealing with. Since a good model makes as few errors as possible, we want the cost function to reach its minimum possible value.

If you have ever read about neural networks, you will be familiar with the classic minimisation algorithm: gradient descent. In essence, gradient descent is a way to minimise an objective function, J(ω) in our case, by updating its parameters in the direction opposite to the gradient of the objective function with respect to those parameters.

Unlike other, simpler optimisation problems, the function J can depend on millions of parameters and its minimisation is not trivial. During the optimisation of a neural network it is common to run into difficulties such as overfitting or underfitting, choosing the right moment to stop training, getting stuck in local minima or saddle points, or facing pathological curvature. In this article we will explore some techniques to address the last two problems.

Image taken from [2]

Remember that gradient descent updates the weights ω of the network at step t+1 as follows:
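With ∇J(ωₜ) denoting the gradient of the cost function with respect to the weights at step t, the standard form of this update is

\omega_{t+1} = \omega_t - \alpha \, \nabla J(\omega_t)

where α > 0 is the learning rate discussed below.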

In order to avoid these problems, we can introduce some variations into this formula. For instance, we could alter the learning rate α, modify the component relative to the gradient, or modify both terms. There are many variations of the previous equation, each trying to adapt it to the specific problem to which it is applied; this is why they are called adaptive methods.

Let’s take a closer look at some of the most commonly used techniques:

1. Adaptive learning rate

The learning rate α is the network's hyperparameter that controls how much the model changes, based on the value of the cost function, each time the weights are updated; it dictates how quickly the model adapts to the problem. As we mentioned earlier, choosing this value is not trivial. If α is too small, training takes longer and the process may not even converge, while if it is too large, the algorithm will oscillate and may diverge.

Left: α too low, small steps and training takes longer. Right: α too high, oscillations and training may diverge

Although the common default of α = 0.01 often provides good results, it has been shown that training improves when α stops being constant and instead depends on the iteration t. Below are three options for making α depend on t (typical expressions are sketched after the list):

  • Exponential decay:
  • Inverse decay:
  • Potential decay:
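The exact expressions vary slightly between references; with α₀ the initial learning rate, k > 0 a decay constant and t the iteration number, typical forms of these three schedules are

\alpha_t = \alpha_0 \, e^{-kt} \quad \text{(exponential decay)}

\alpha_t = \frac{\alpha_0}{1 + kt} \quad \text{(inverse decay)}

\alpha_t = \alpha_0 \, (1 + t)^{-k} \quad \text{(potential decay)}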

The constant parameter k controls how αₜ decreases and is usually set by trial and error. There are also well-known techniques for choosing the initial value α₀, but they are beyond the scope of this article.

Another, simpler approach often used to adapt α consists of reducing it by a constant factor every certain number of epochs (training cycles through the full training dataset), for example dividing it by two every ten epochs. Lastly, the option proposed in [1] is shown below,
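A schedule matching that description, in the spirit of the 1/t decay recommended in [1], is

\alpha_t = \alpha_0 \, \frac{\tau}{\max(t, \tau)}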

where α is kept constant during the first τ iterations and then decreases with each iteration t.
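As a minimal illustration (a sketch with illustrative names, not the exact schedules of any particular framework), both ideas can be written as plain Python functions:

```python
def step_decay(alpha_0, epoch, drop_factor=0.5, epochs_per_drop=10):
    """Reduce the learning rate by a constant factor every few epochs."""
    return alpha_0 * drop_factor ** (epoch // epochs_per_drop)


def constant_then_decay(alpha_0, t, tau=1000):
    """Keep alpha_0 constant for the first tau iterations, then decay as 1/t."""
    return alpha_0 * tau / max(t, tau)
```

Plugging either of these into the weight update in place of a fixed α already gives an adaptive learning rate at essentially no extra computational cost.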

2. Adaptive optimisers

  • Momentum

We have seen that in a pathological curvature situation gradient descent has problems in the ravines [Image 1], the regions where the curvature of the cost function is much greater along one dimension than along the others. In this scenario, gradient descent oscillates between the ridges of the ravine and progresses more slowly towards the optimum. To avoid this we could use second-order methods such as Newton's method, but that would significantly raise the computational requirements, since we would have to evaluate the Hessian matrix of the cost function for thousands of parameters.

The momentum technique was developed to dampen these oscillations and accelerate the convergence of training. Instead of considering only the value of the gradient at each step, this technique accumulates information about the gradients of previous steps to determine the direction in which to advance. The update is defined as follows:
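One common formulation (others fold the 1 − β factor into the learning rate) keeps an exponential moving average mₜ of the gradients:

m_t = \beta \, m_{t-1} + (1 - \beta) \, \nabla J(\omega_t)

\omega_{t+1} = \omega_t - \alpha \, m_t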

where β ∈ [0,1] and m₀ is equal to zero.

If we set β = 0 in the previous equation, we see that we recover the plain gradient descent algorithm!

As we perform more iterations, the gradients from older steps carry a lower associated weight; we are computing an exponential moving average of the gradients! This is more effective than a simple moving average, since it adapts more quickly to fluctuations in the most recent data.
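As a minimal sketch of one momentum step, assuming a user-supplied grad function that returns ∇J(ω) (all names here are illustrative):

```python
import numpy as np

def momentum_step(w, m, grad, alpha=0.01, beta=0.9):
    """One momentum update: average past gradients, then step along the average."""
    g = grad(w)                    # gradient of the cost at the current weights
    m = beta * m + (1 - beta) * g  # exponential moving average of the gradients
    w = w - alpha * m              # move against the averaged gradient
    return w, m

# In a training loop: start with m = np.zeros_like(w) and update (w, m) at every step.
```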

  • RMSProp

The Root Mean Square Propagation technique, better known as RMSProp, also aims to accelerate convergence towards a minimum, but in a different way from Momentum. In this case we do not modify the gradient term explicitly; instead, we adapt the learning rate:
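A standard way to write the update, with the square and the division taken element-wise, is

v_t = \beta \, v_{t-1} + (1 - \beta) \, \big(\nabla J(\omega_t)\big)^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} \, \nabla J(\omega_t)

(some references place ϵ inside the square root; the difference is negligible in practice).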

We have now introduced vₜ as the exponential moving average of the squared gradients. As initial value it is common to take v₀ = 0, with the constant parameters set to β = 0.9 and ϵ = 10⁻⁷.

Let's imagine that we are stuck at a local minimum where the values of the gradient are close to zero. In order to get out of this "minimum zone" we would need to take larger steps, that is, to increase α. Conversely, if the value of the gradient is large, it means we are at a point with a lot of curvature, so in order not to overshoot the minimum we want to decrease the step size. By dividing α by the factor √(vₜ) + ϵ we incorporate information about the gradients of previous steps, effectively increasing the step when the magnitude of recent gradients is small and decreasing it when it is large.
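The same idea as a short Python sketch, again with illustrative names and a user-supplied grad function:

```python
import numpy as np

def rmsprop_step(w, v, grad, alpha=0.01, beta=0.9, eps=1e-7):
    """One RMSProp update: scale the step by the recent gradient magnitude."""
    g = grad(w)                       # gradient of the cost at the current weights
    v = beta * v + (1 - beta) * g**2  # EMA of squared gradients (element-wise)
    w = w - alpha * g / (np.sqrt(v) + eps)  # small recent gradients -> larger effective step
    return w, v
```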

  • ADAM

The Adaptive Moment Estimation algorithm, better known as ADAM, combines the ideas of the two previous optimisers:
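Leaving the bias correction discussed below aside for a moment, a standard formulation is

m_t = \beta_1 \, m_{t-1} + (1 - \beta_1) \, \nabla J(\omega_t)

v_t = \beta_2 \, v_{t-1} + (1 - \beta_2) \, \big(\nabla J(\omega_t)\big)^2

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{v_t} + \epsilon} \, m_t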

Here β₁ corresponds to the Momentum parameter and β₂ to the RMSProp one.

We are now adding two more hyperparameters to tune in addition to α, so some might find this formulation counterproductive, but it is a price worth paying if we aim to accelerate the training process. The values typically taken by default are β₁ = 0.9, β₂ = 0.999 and ϵ = 10⁻⁷.

It has been shown empirically that this optimiser can converge to the minimum faster than other well-known techniques such as Stochastic Gradient Descent.

Lastly, it is worth noting that it is common to apply a bias correction in ADAM's equations. This is because during the first iterations the moving averages mₜ and vₜ do not yet have much information from previous steps and are biased towards their zero initialisation; the update is then reformulated with the bias-corrected averages.
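As defined in [3], these are

\hat{m}_t = \frac{m_t}{1 - \beta_1^{\,t}}, \qquad \hat{v}_t = \frac{v_t}{1 - \beta_2^{\,t}}

and the weights are updated as

\omega_{t+1} = \omega_t - \frac{\alpha}{\sqrt{\hat{v}_t} + \epsilon} \, \hat{m}_t

Since β₁ᵗ and β₂ᵗ tend to zero as t grows, the correction only matters during the first iterations, exactly when mₜ and vₜ are still biased towards their zero initialisation. Putting everything together, a minimal sketch of one ADAM step (illustrative names, following the recipe in [3], with the article's default hyperparameters):

```python
import numpy as np

def adam_step(w, m, v, t, grad, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-7):
    """One ADAM update with bias correction; the step counter t starts at 1."""
    g = grad(w)
    m = beta1 * m + (1 - beta1) * g     # EMA of gradients (Momentum part)
    v = beta2 * v + (1 - beta2) * g**2  # EMA of squared gradients (RMSProp part)
    m_hat = m / (1 - beta1**t)          # bias correction, relevant only early on
    v_hat = v / (1 - beta2**t)
    w = w - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```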

Conclusion

In summary, the goal of this article is to introduce some of the problems that may arise when we wish to optimise a neural network and the most well-known adaptive techniques to tackle them. We’ve seen that the combination of a dynamic learning rate with an adaptive optimiser can help the network learn much faster and perform better. We should remember, however, that Data Science is a field in constant evolution and while you were reading this article, a new paper may have been published trying to prove how a new optimiser can perform a thousand times better than all the ones mentioned here!

In future articles we will look at how to tackle the dreaded problem of an overfitted model and the vanishing gradient. Until then, if you need to optimise a neural network, don’t settle for the default configuration and use these examples to try to adapt it to your specific real problem or business application :)

References

[1] Bengio, Y. 2012. Practical Recommendations for Gradient-Based Training of Deep Architectures.

[2] Kathuria, A. Intro to Optimization in Deep Learning: Momentum, RMSProp and Adam.

[3] Kingma, D. P. and Ba, J. 2014. Adam: A Method for Stochastic Optimization.

[4] Xu, Z., Dai, A. M., Kemp, J. and Metz, L. 2019. Learning an Adaptive Learning Rate Schedule.
