Implementing Various Root-Finding Algorithms in Python
With an actual application in data science and logistic regression
In data science, you will find that many tasks involve maximising or minimising some objective. In regression, you find the parameters that minimise the sum of squared errors. In naïve Bayes, you identify the class that maximises the posterior probability. Many other examples exist: the decision tree maximises information gain, the SVM maximises the margin, the EM algorithm maximises the expected complete-data log-likelihood, and so on. Some of these calculations are simple enough that basic algebra does the trick just fine. However, more complex ones require numerical algorithms to approximate the solution.
The most popular optimisation algorithm in machine learning is, of course, gradient descent. But it is not the only way to numerically approximate the maximum or minimum of a function. This article will show you some root-finding algorithms that can serve as substitutes for gradient descent. Note that each root-finding algorithm has specific conditions it must satisfy in order to converge; hence, no single algorithm works universally on all problems.
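To see why root finding can stand in for optimisation, note that minimising a smooth function g is equivalent to finding a root of its derivative, since g′(x) = 0 at an interior minimum. The sketch below is my own illustration rather than code from this article; it applies Newton's method to g′ for a simple quadratic, which also requires the second derivative g″:

```python
def g(x):
    # Objective to minimise: g(x) = (x - 3)^2, with minimum at x = 3
    return (x - 3) ** 2

def g_prime(x):
    # First derivative of g; we look for a root of this function
    return 2 * (x - 3)

def g_double_prime(x):
    # Second derivative of g, needed for Newton's method on g_prime
    return 2.0

x = 0.0  # initial guess
for _ in range(50):
    step = g_prime(x) / g_double_prime(x)  # Newton update on g'
    x -= step
    if abs(step) < 1e-12:  # stop once the update is negligible
        break

print(x)  # ~3.0, the minimiser of g
```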
Root-finding algorithms are numerical methods that approximate a value x satisfying f(x) = 0 for any continuous function f…
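For concreteness, here is a minimal sketch of perhaps the simplest such method, bisection. The function name and parameters here are my own illustration, assuming f is continuous and changes sign on the interval [a, b]:

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Approximate a root of f on [a, b], assuming f(a) and f(b)
    have opposite signs so that a sign change brackets a root."""
    if f(a) * f(b) > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        mid = (a + b) / 2
        if abs(f(mid)) < tol or (b - a) / 2 < tol:
            return mid
        # Keep whichever half-interval still brackets the sign change
        if f(a) * f(mid) < 0:
            b = mid
        else:
            a = mid
    return (a + b) / 2

# Example: the positive root of f(x) = x^2 - 2 is sqrt(2) ~ 1.41421
print(bisect(lambda x: x ** 2 - 2, 0, 2))
```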