Demystifying Deep Learning Optimizers: Exploring Gradient Descent Algorithms (Part 3)

A Comprehensive Guide to Nesterov Accelerated Gradient (NAG)

Hemant Rattey
Nerd For Tech
4 min read · Jun 17, 2024

In the second part of this series, we discussed the concept of momentum and how the Exponential Weighted Moving Average (EWMA) technique enhances the convergence speed and stability of gradient descent.

In this article, we will learn about another advanced optimization technique called Nesterov Accelerated Gradient (NAG). This technique builds upon the momentum concept and offers further improvements in optimization efficiency.

Limitations of Momentum-Based Gradient Descent

While Momentum-based Gradient Descent has several advantages, it also has its limitations:

  1. Risk of Overshooting: The accumulated momentum can sometimes cause the algorithm to overshoot the optimal point, especially in regions where the cost function changes rapidly. This causes oscillations around the minimum and slows down convergence.
  2. Plateaus and Saddle Points: Momentum-based Gradient Descent might struggle with plateaus or saddle points, where gradients are nearly zero, leading to slow progress.
  3. Dependence on Hyperparameters: Choosing the right values for the learning rate and momentum parameter can be tricky, and incorrect values can either amplify issues or dampen the benefits.

Introducing Nesterov Accelerated Gradient (NAG)

Nesterov Accelerated Gradient (NAG) is an optimization algorithm designed to address some of the limitations of Momentum-based Gradient Descent. It incorporates momentum into the update process more accurately by evaluating the gradient at an estimated future position of the parameters.

How NAG Works

NAG improves upon Momentum-based Gradient Descent by adding a correction to the gradient calculation. Instead of computing the gradient at the current position, NAG computes the gradient at the predicted next position. This allows the algorithm to make a more informed update that damps oscillations, leading to faster and more reliable convergence.
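
To make the look-ahead idea concrete, here is a minimal Python sketch contrasting the two updates. The names grad, theta, v, alpha, and beta are illustrative placeholders (not from this article); grad(theta) is assumed to return the gradient of the cost function at theta.

```python
# Sketch only: grad, theta, v, alpha and beta are assumed to be defined elsewhere.

# Classical momentum: the gradient is evaluated at the current parameters.
v = beta * v + alpha * grad(theta)
theta = theta - v

# NAG: the gradient is evaluated at the look-ahead point (theta - beta * v),
# i.e. roughly where the accumulated momentum is about to carry the parameters.
v = beta * v + alpha * grad(theta - beta * v)
theta = theta - v
```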

Key Concepts

  1. Look-Ahead Gradient: NAG calculates the gradient not at the current position but at the estimated future position. This “look-ahead” step provides a better approximation of where the momentum will take the parameters, leading to more precise updates.
  2. Enhanced Stability: By considering the look-ahead gradient, NAG reduces the risk of overshooting and provides a more stable optimization path, even in regions with steep changes in the cost function.

Implementation

The update rule for NAG can be expressed as follows (a short code sketch of these steps follows the symbol definitions below):

  • Compute the look-ahead gradient: ∇J(θ_t − β·v_{t−1})
  • Update the velocity: v_t = β·v_{t−1} + α·∇J(θ_t − β·v_{t−1})
  • Update the parameters: θ_{t+1} = θ_t − v_t

Where:

  • v_t is the velocity vector at iteration t.
  • β is the momentum hyperparameter, typically set between 0 and 1.
  • ∇J(θ_t − β·v_{t−1}) is the gradient of the cost function evaluated at the look-ahead position.
  • θ_t represents the parameters at iteration t.
  • α is the learning rate.
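
As an illustration, here is a minimal, self-contained sketch of this update rule in Python with NumPy. The toy quadratic cost, learning rate, momentum value, starting point, and iteration count are arbitrary choices for demonstration, not values prescribed by NAG itself.

```python
import numpy as np

# Toy cost: J(theta) = 0.5 * theta^T A theta, whose minimum is at the origin.
A = np.array([[3.0, 0.0],
              [0.0, 1.0]])  # positive-definite matrix (illustrative choice)

def grad(theta):
    """Gradient of J(theta) = 0.5 * theta @ A @ theta."""
    return A @ theta

alpha = 0.1                      # learning rate (illustrative)
beta = 0.9                       # momentum hyperparameter (illustrative)
theta = np.array([5.0, -3.0])    # initial parameters
v = np.zeros_like(theta)         # velocity vector

for t in range(200):
    lookahead_grad = grad(theta - beta * v)  # gradient at the look-ahead position
    v = beta * v + alpha * lookahead_grad    # update the velocity
    theta = theta - v                        # update the parameters

print(theta)  # should end up very close to the minimum at [0. 0.]
```

Replacing the look-ahead gradient grad(theta - beta * v) with grad(theta) recovers plain momentum-based gradient descent, which makes the two methods easy to compare side by side.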

Limitations

While NAG offers significant improvements, it still requires careful tuning of hyperparameters such as the learning rate and momentum parameter; incorrect settings can negate the benefits of NAG or even worsen performance. Moreover, because NAG damps oscillations, it can sometimes get stuck in a local minimum: without the extra push of undamped momentum to carry the parameters past it, the algorithm may settle on a sub-optimal solution.

Conclusion

In this article, we explored the Nesterov Accelerated Gradient (NAG), an advanced optimization technique that enhances the benefits of momentum-based gradient descent. NAG provides faster and more stable convergence by incorporating a look-ahead gradient step.

But our journey doesn’t end here. In the next part, we’ll explore another optimization technique: Adagrad (Adaptive Gradient Algorithm). With each technique we cover, we get better at making our deep learning models train more efficiently and perform better. Stay tuned!

Please feel free to share your feedback.

Come say Hi! to me on Twitter.
