Types of Optimization Algorithms used in Neural Networks and Ways to Optimize Gradient Descent
Anish Singh Walia

You note that the term Stochastic Gradient Descent is also used for mini-batch gradient descent. However, I would add that this usage is erroneous: SGD updates on one data point at a time. Using “SGD” when people mean mini-batch gradient descent is inaccurate and should be discouraged. There’s unfortunately a lot of misuse of theory and terminology these days.
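
For concreteness, here is a minimal NumPy sketch of the distinction. The least-squares loss, learning rate, batch size, and function names are purely illustrative assumptions on my part, not anything from the article: a true SGD step computes the gradient from a single data point, while a mini-batch step averages the gradient over a batch.

```python
import numpy as np

def sgd_step(w, x_i, y_i, lr=0.01):
    """True SGD: gradient from a single data point (x_i, y_i).

    Illustrative least-squares loss: 0.5 * (x_i @ w - y_i) ** 2.
    """
    grad = (x_i @ w - y_i) * x_i
    return w - lr * grad

def minibatch_step(w, X_batch, y_batch, lr=0.01):
    """Mini-batch gradient descent: gradient averaged over the batch."""
    residual = X_batch @ w - y_batch
    grad = X_batch.T @ residual / len(y_batch)
    return w - lr * grad

# Illustrative usage on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

w = np.zeros(3)
for epoch in range(5):
    for i in rng.permutation(len(y)):   # true SGD: one data point per update
        w = sgd_step(w, X[i], y[i])

w_mb = np.zeros(3)
for epoch in range(5):
    for start in range(0, len(y), 20):  # mini-batch: 20 data points per update
        batch = slice(start, start + 20)
        w_mb = minibatch_step(w_mb, X[batch], y[batch])
```

Both variants make the same kind of parameter update; the only difference is how many data points contribute to each gradient estimate, which is exactly why the two terms should not be conflated.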

Response by Michel Valstar