#2 ML … Gradient Descent

Abhinaba Bala
Jul 26, 2017 · 1 min read

26.7.17, Wednesday
!! Getting stuck very early may be avoided by trying different initial values for y = mx + c, running the optimization from each, and then taking the minimum over all the different cases !! (???)
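A minimal sketch of this multi-start idea, assuming a plain least-squares line fit (the data, learning rate, and function names here are my own assumptions, not from the notes):

```python
import random

def loss(m, c, xs, ys):
    # Mean squared error of the line y = m*x + c over the data.
    return sum((m * x + c - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def gradient_step(m, c, xs, ys, alpha):
    # One plain gradient-descent step on m and c.
    n = len(xs)
    fm = sum(2 * (m * x + c - y) * x for x, y in zip(xs, ys)) / n
    fc = sum(2 * (m * x + c - y) for x, y in zip(xs, ys)) / n
    return m - alpha * fm, c - alpha * fc

def multi_start_descent(xs, ys, starts=10, steps=500, alpha=0.05):
    # Run descent from several random (m, c) starts and keep the one
    # that ends with the lowest loss -- the "min of all the diff. cases".
    best = None
    for _ in range(starts):
        m, c = random.uniform(-5, 5), random.uniform(-5, 5)
        for _ in range(steps):
            m, c = gradient_step(m, c, xs, ys, alpha)
        if best is None or loss(m, c, xs, ys) < loss(*best, xs, ys):
            best = (m, c)
    return best
```

For a convex line fit all starts land in the same place, so the payoff of multi-start only shows up on non-convex losses; the structure is the same either way.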

Q- What are the different methods to avoid getting stuck at local minima? Is there a single good method which will never get stuck at a local minimum, given enough time?
A-

!1 initial m = (last h - first h) / (last x - first x)
initial c = sum(h(x)) / N
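The "!1" initialization heuristic can be sketched like this, assuming `xs` holds the inputs and `hs` the target values h(x) (the function name is my own):

```python
def init_params(xs, hs):
    # Heuristic initial slope: rise over run between the first
    # and last data points.
    m0 = (hs[-1] - hs[0]) / (xs[-1] - xs[0])
    # Heuristic initial intercept: mean of the target values.
    c0 = sum(hs) / len(hs)
    return m0, c0
```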

!!!2 In each step of optimization, take the best of:
a) m = m - alpha*(fm) ; c = c - alpha*(fc)
b) m = m - alpha*(fm) ; c = c
c) m = m ; c = c - alpha*(fc)
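A sketch of this best-of-three step, assuming `fm` and `fc` are the partial derivatives of the loss with respect to m and c, and `loss_fn` evaluates the loss at a given (m, c) (all names are my own assumptions):

```python
def best_of_three_step(m, c, fm, fc, alpha, loss_fn):
    # Try the three candidate updates from the notes and keep the one
    # with the lowest loss: a) update both, b) only m, c) only c.
    candidates = [
        (m - alpha * fm, c - alpha * fc),  # a) update both
        (m - alpha * fm, c),               # b) update m only
        (m, c - alpha * fc),               # c) update c only
    ]
    return min(candidates, key=lambda mc: loss_fn(*mc))
```

This costs two extra loss evaluations per step but can never do worse than the plain joint update a).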

Q- Where do I get good, reliable, working data for Gradient Descent (to start with)?
A-

My learning of Machine Learning

Learn ML rather than use it as a black box.

