Why do high learning rates make the weight updates diverge?

Prash goel
Jul 6 · 3 min read
[Figure: Gradient descent from x = -1 with a learning rate of 0.8]
[Figure: Gradient descent from x = -1 with a learning rate of 1.2]
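The two figures above can be reproduced numerically. As a minimal sketch, assume the plotted objective is f(x) = x² (the article does not state the exact function), so the gradient is 2x and each update multiplies x by (1 − 2·lr). With lr = 0.8 that factor is −0.6, so the iterates shrink toward the minimum; with lr = 1.2 it is −1.4, so they overshoot and grow without bound:

```python
def gradient_descent(lr, x0=-1.0, steps=20):
    """Plain gradient descent on the assumed objective f(x) = x**2.

    The gradient is f'(x) = 2*x, so each step computes
    x <- x - lr * 2 * x = (1 - 2*lr) * x.
    """
    x = x0
    for _ in range(steps):
        x = x - lr * 2 * x
    return x

# lr = 0.8: |1 - 2*0.8| = 0.6 < 1, so the iterates contract toward 0.
print(gradient_descent(0.8))  # tiny magnitude: converged
# lr = 1.2: |1 - 2*1.2| = 1.4 > 1, so the iterates blow up.
print(gradient_descent(1.2))  # large magnitude: diverged
```

The contraction condition here is |1 − 2·lr| < 1, i.e. lr < 1 for this particular quadratic; the same intuition (step size large enough to overshoot the minimum and amplify each step) is what drives divergence for general loss surfaces.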