# 10 Machine Learning Flavors in sklearn

It is easy to paint machine learning in broad strokes: one big black box where you plug in what you have and something relevant comes out the other side. But in reality it is much more complicated, with a wide variety of tools that fit some problems better than others.

Before computers, much of this was calculated mathematically by statisticians using some very complex models (here is a great article about statistics and machine learning). Modern computers can run these algorithms in minutes, depending on the amount of data at hand.

Today machine learning and AI (artificial intelligence) have grown exponentially. For many, barriers to entry like expensive computing servers or specialist programmers are being lowered. A solid computer can run models, and if there is too much data or complexity, you are one click away from renting time on the Amazon cloud with all the horsepower you want.

Here is a great primer that explains some of the conceptual differences between types of algorithms:

**A Tour of Machine Learning Algorithms**

*In this post, we take a tour of the most popular machine learning algorithms. It is useful to tour the main algorithms…*machinelearningmastery.com

### Types

There are several classes of machine learning, and each algorithm reflects its application. Here are the basic breakdowns as defined by Scikit-learn (sklearn), a great library for machine learning:

**scikit-learn: machine learning in Python - scikit-learn 0.18.1 documentation**

scikit-learn.org

**Classification**

Identifying which category an object belongs to.

Applications: Spam detection, image recognition.

Algorithms: SVM, nearest neighbors, random forest, …

**Regression**

Predicting a continuous-valued attribute associated with an object.

Applications: Drug response, stock prices.

Algorithms: SVR, ridge regression, Lasso, …

**Clustering**

Automatic grouping of similar objects into sets.

Applications: Customer segmentation, grouping experiment outcomes.

Algorithms: k-Means, spectral clustering, mean-shift, …

Here is Microsoft's cheat sheet on which algorithm to use:

**Machine learning algorithm cheat sheet**

*The Microsoft Azure Machine Learning Algorithm Cheat Sheet helps you choose the right algorithm for a predictive…*docs.microsoft.com

For each of these groups we are going to dig in a bit deeper, though this is by no means an exhaustive list. For me it was a great way to learn a little more about each one, and it should give you a light overview of what is out there. Each title is linked to the related sklearn page so you can explore the parameters for your own projects.

### Classification

A Classification Algorithm is a procedure for selecting the hypothesis from a set of alternatives that best fits a set of observations. Or in plain words, it is a way to determine which group an object belongs to using multiple variables.

**Random Forest Classifier** (Classification)

**3.2.4.3.1. sklearn.ensemble.RandomForestClassifier - scikit-learn 0.18.1 documentation**

*class sklearn.ensemble. RandomForestClassifier( n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2…*scikit-learn.org

Random Forest is a commonly used algorithm, so called after the multitude of decision trees it builds, whose individual outputs are combined by majority vote (RandomForestClassifier) or mean prediction (RandomForestRegressor). It fits each tree on a different sub-sample of the dataset and averages the results to improve predictive accuracy and control overfitting.

It is a fairly strong model, used widely on Kaggle for its versatility; it was actually the first one I used in my Titanic problem.

sklearn.ensemble.RandomForestClassifier

*Fast, simple to use, and robust to noise and missing data, but it may be difficult to interpret. There is a reason it is growing in popularity.*
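As a quick illustration (the dataset and parameter choices here are my own, not from the article), here is a minimal sketch of fitting a random forest on sklearn's bundled Iris data:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Load the classic Iris dataset: 4 features, 3 flower species
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# n_estimators is the number of trees in the "forest";
# each tree sees a bootstrap sub-sample of the training data
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
accuracy = model.score(X_test, y_test)
```

Note that older sklearn versions defaulted to only 10 trees; raising `n_estimators` usually helps accuracy at the cost of training time.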

**K Nearest Neighbors Classifier** (Classification)

**1.6. Nearest Neighbors - scikit-learn 0.18.1 documentation**

*Despite its simplicity, nearest neighbors has been successful in a large number of classification and regression…*scikit-learn.org

Nearest Neighbors-based classification is a type of instance-based learning where classification is determined by a simple majority vote of the nearest neighbors of each point. Out of the box it uses uniform weights, but that can be changed manually to fine-tune the model. In cases where the data is not uniformly sampled, the RadiusNeighborsClassifier can be used instead, as it relies on a fixed radius set by the user.

sklearn.neighbors.KNeighborsClassifier

sklearn.neighbors.KNeighborsRegressor

*Simple and powerful with no training phase needed, but hardware-expensive: it is slow on new instances and performs poorly as dimensionality increases.*
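To make the majority vote concrete, here is a toy sketch on two made-up clusters of 2-D points (data invented for illustration):

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Two obvious groups of points, labeled 0 and 1
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])

# weights='uniform' is the default; 'distance' would weight
# closer neighbors more heavily in the vote
knn = KNeighborsClassifier(n_neighbors=3, weights='uniform')
knn.fit(X, y)

# Each query point is assigned the majority label of its 3 nearest neighbors
pred = knn.predict([[0.5, 0.5], [5.5, 5.5]])
```

There is no real "training" step here: `fit` simply stores the points, which is why prediction on new instances is the expensive part.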

**SVM** (Classification/Regression)

**1.4. Support Vector Machines - scikit-learn 0.18.1 documentation**

*The support vector machines in scikit-learn support both dense ( numpy.ndarray and convertible to that by numpy.asarray…*scikit-learn.org

There are multiple types of Support Vector Machines (C-Support Vector Classification, LinearSVC, or SVR on the regression side, for example). They are popular in text classification problems, where very high-dimensional feature spaces are the norm, but Random Forest seems to be stealing their crown.

sklearn.svm.SVC

sklearn.svm.LinearSVC

sklearn.svm.NuSVC

sklearn.svm.SVR

sklearn.svm.LinearSVR

sklearn.svm.NuSVR

sklearn.svm.OneClassSVM

*Great for complex non-linear relationships and good with noise, but parameter control can get complicated, and it uses a lot of memory and processing power as it scales.*
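A minimal sketch of an SVM on high-dimensional data, using sklearn's bundled digits dataset (64 pixel features per image); the `gamma` and `C` values below are my own illustrative choices, not ones from the article:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# 8x8 handwritten digit images flattened to 64 features each
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# kernel='rbf' handles non-linear boundaries; gamma controls the kernel
# width and C trades margin size against training errors
clf = SVC(kernel='rbf', gamma=0.001, C=10.0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Tuning `gamma` and `C` (typically with a grid search) is where the parameter control gets complicated.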

**Gradient Boosting** (Classification/Regression)

**3.2.4.3.5. sklearn.ensemble.GradientBoostingClassifier - scikit-learn 0.18.1 documentation**

*class sklearn.ensemble. GradientBoostingClassifier( loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0…*scikit-learn.org

Gradient Boosting is a combination of Gradient Descent and Boosting. It builds an ensemble of weak prediction models in a forward, stage-wise manner to produce a strong prediction model, and it allows optimization of arbitrary differentiable loss functions. A similar boosting algorithm is AdaBoost, where the sum of the parts is much stronger than the bunch of weak predictions that make it up.

sklearn.ensemble.GradientBoostingClassifier

sklearn.ensemble.GradientBoostingRegressor

*Handles missing values well with no need to transform variables, but it can overfit and struggles when scaling.*
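A small sketch on synthetic data (generated with `make_classification`, my choice for illustration) showing the two key knobs, `n_estimators` and `learning_rate`:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem with 10 features
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_estimators is the number of boosting stages (weak trees added one
# at a time); learning_rate shrinks each stage's contribution
gbc = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1,
                                 max_depth=3, random_state=0)
gbc.fit(X_train, y_train)
accuracy = gbc.score(X_test, y_test)
```

Lowering `learning_rate` while raising `n_estimators` is a common way to trade training time for less overfitting.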

**Gaussian NB/Gaussian Naive Bayes** (Classification)

**sklearn.naive_bayes.GaussianNB - scikit-learn 0.18.1 documentation**

*This documentation is for scikit-learn version 0.18.1 - Other versions*scikit-learn.org

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of independence between every pair of features. Basically, it looks at each feature (like “red” and “round”) independently and determines the classification probability for each (i.e. whether it is an apple), rather than trying to consider multiple features together and then determine the probability as a whole. This is what helps make it fast.

There are several variations on Naive Bayes, including MultinomialNB for multinomially distributed data and BernoulliNB for multivariate Bernoulli distributions; in other words, BernoulliNB specifically requires binary-valued (Bernoulli, boolean) features.

sklearn.naive_bayes.GaussianNB

sklearn.naive_bayes.MultinomialNB

sklearn.naive_bayes.BernoulliNB

*Fast for classification and can be trained on a partial set of data if the whole dataset is too big to fit in memory, but its assumption of feature independence may not hold true in the real world.*
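A toy sketch with two made-up feature clusters, also showing `partial_fit`, which is what lets Naive Bayes train on data in chunks when it won't fit in memory:

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Invented data: two well-separated classes in feature space
X = np.array([[1.0, 2.0], [1.1, 1.9], [0.9, 2.1],
              [5.0, 6.0], [5.1, 5.9], [4.9, 6.1]])
y = np.array([0, 0, 0, 1, 1, 1])

# Each feature's distribution is modeled independently per class
gnb = GaussianNB()
gnb.fit(X, y)
pred = gnb.predict([[1.0, 2.0], [5.0, 6.0]])

# partial_fit trains incrementally; classes must be declared on the
# first call since later chunks may not contain every class
gnb2 = GaussianNB()
gnb2.partial_fit(X[:3], y[:3], classes=[0, 1])
gnb2.partial_fit(X[3:], y[3:])
```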

### Regression

A Regression Algorithm uses a statistics-based approach for estimating the relationships among variables. The result can be a linear regression, usually represented as a line of best fit in a scatterplot, or a more complicated depiction of a dependent variable and one or more independent variables (or ‘predictors’). One of the simplest is plain linear regression (sklearn.linear_model.LinearRegression), which is perfect for data exploration in visualizations.

**Logistic Regressions** (Regression)

**sklearn.linear_model.LogisticRegression - scikit-learn 0.18.1 documentation**

*This class implements regularized logistic regression using the 'liblinear' library, 'newton-cg', 'sag' and 'lbfgs…*scikit-learn.org

Logistic Regressions are used to predict the odds of a binary dependent variable (what you are trying to predict) based on the values of the independent variables (features). Basically, it will try to determine, based on the features, whether the final result is or is not in a given class.

sklearn.linear_model.LogisticRegression

*Nice probabilistic focus and fast to train on big data, but it requires work to fit non-linear functions.*
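The "probabilistic focus" is the point of `predict_proba`: rather than just a yes/no label, you get odds. A toy sketch (the pass/fail numbers are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Made-up data: did a student pass (1) or fail (0), given hours studied?
hours = np.array([[0.5], [1.0], [1.5], [2.0], [3.0], [4.0], [5.0], [6.0]])
passed = np.array([0, 0, 0, 0, 1, 1, 1, 1])

logreg = LogisticRegression()
logreg.fit(hours, passed)

# Each row of the result is [P(fail), P(pass)] for that input
proba = logreg.predict_proba([[1.0], [5.0]])
```

To handle non-linear relationships you would typically engineer features first (e.g. polynomial terms), which is the "work" the summary above refers to.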

**Random Forest Regressor** (Regression)

**3.2.4.3.2. sklearn.ensemble.RandomForestRegressor - scikit-learn 0.18.1 documentation**

*class sklearn.ensemble. RandomForestRegressor( n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2…*scikit-learn.org

As with the Random Forest Classifier, the Random Forest Regressor fits a number of decision trees on various sub-samples of the dataset and uses averaging to improve predictive accuracy and control over-fitting.

sklearn.ensemble.RandomForestRegressor

*Fast to train and does well with noise and missing values, but computing resources grow when scaling for accuracy as the number of trees increases.*
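A minimal sketch on a noisy sine wave (synthetic data of my own choosing), a non-linear target that a plain line would fit poorly but averaged trees handle easily:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic non-linear data: a sine wave plus Gaussian noise
rng = np.random.RandomState(0)
X = np.sort(5 * rng.rand(200, 1), axis=0)
y = np.sin(X).ravel() + 0.1 * rng.randn(200)

# Averaging 100 trees smooths out individual trees' overfitting
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X, y)
r2 = rfr.score(X, y)  # R^2 on the training data
```

For a real project you would score on held-out data; training-set R^2 is shown here only to keep the sketch short.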

**Ordinary Least Squares/Ridge Regression** (Regression)

**1.1. Generalized Linear Models - scikit-learn 0.18.1 documentation**

*The is a linear model that estimates sparse coefficients. It is useful in some contexts due to its tendency to prefer…*scikit-learn.org

Ridge Regression is an optimization of Ordinary Least Squares Regression. Both are linear regression models: methods for estimating the unknown parameters in a linear model, with the goal of minimizing the sum of the squares of the differences between the observed responses in the dataset and those predicted by a linear function of a set of explanatory variables. The key difference is that Ridge adds regularization to prevent overfitting as the coefficients grow large. For real detail take a look here.

sklearn.linear_model.Ridge

*Ordinary Least Squares is the more commonly used regression, but it struggles with outliers and, obviously, anything that is non-linear in nature.*
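A sketch of the coefficient-shrinking difference, using synthetic data I constructed to be nearly collinear (the classic case where OLS coefficients blow up and ridge's penalty reins them in):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

# Two almost identical features: x2 is x1 plus tiny noise
rng = np.random.RandomState(0)
x1 = rng.rand(50)
x2 = x1 + 0.001 * rng.randn(50)
X = np.column_stack([x1, x2])
y = x1 + 0.05 * rng.randn(50)

ols = LinearRegression().fit(X, y)
ridge = Ridge(alpha=1.0).fit(X, y)  # alpha sets regularization strength

# OLS exploits the tiny x1/x2 difference to chase noise, producing
# huge offsetting coefficients; ridge keeps them small
ols_norm = np.abs(ols.coef_).sum()
ridge_norm = np.abs(ridge.coef_).sum()
```

The fitted lines predict similarly here, but ridge's smaller coefficients generalize better when the collinearity pattern shifts in new data.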

### Clustering

Clustering means discovering structure by automatically grouping observations based on their features. It is an unsupervised machine learning method, which basically means there are no labeled answers to train against, and it is widely used in data mining.

**k-Means** (Clustering)

**sklearn.cluster.KMeans - scikit-learn 0.18.1 documentation**

*When pre-computing distances it is more numerically accurate to center the data first. If copy_x is True, then the…*scikit-learn.org

The k-means problem is solved using Lloyd’s algorithm. Its goal is to cluster data by trying to separate samples into n groups of equal variance. The number of clusters (k) must be specified.

sklearn.cluster.KMeans

sklearn.cluster.MiniBatchKMeans

*Scales well to a large number of samples and is fast, but if the clusters are not very spherical it can struggle, and running it repeatedly may not return exactly the same answer.*
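A minimal sketch on three synthetic blobs (data generated for illustration); note that k must be supplied up front, and `n_init` restarts help with the run-to-run variability mentioned above:

```python
import numpy as np
from sklearn.cluster import KMeans

# Three well-separated synthetic blobs of 50 points each
rng = np.random.RandomState(0)
X = np.vstack([
    rng.randn(50, 2) + [0, 0],
    rng.randn(50, 2) + [10, 0],
    rng.randn(50, 2) + [0, 10],
])

# n_clusters (k) must be chosen by the user; n_init reruns the
# algorithm from different random seeds and keeps the best result
km = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = km.fit_predict(X)
n_clusters_found = len(set(labels))
```

Because Lloyd's algorithm only finds a local optimum, fixing `random_state` (or raising `n_init`) is how you make results reproducible.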

**Mean-Shift** (Clustering)

**sklearn.cluster.MeanShift - scikit-learn 0.18.1 documentation**

*Mean shift clustering aims to discover "blobs" in a smooth density of samples. It is a centroid-based algorithm, which…*scikit-learn.org

Mean-Shift is a non-parametric (i.e. it doesn’t expect a bell-curve distribution) feature-space analysis technique for locating the maxima of a density function. Clustering works by iteratively shifting points toward “high” peaks of density.

sklearn.cluster.MeanShift

*Good for uneven distributions in a data set, but it is not highly scalable.*
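Unlike k-Means, Mean-Shift discovers the number of clusters itself; the only knob is the kernel bandwidth, which sklearn can estimate from the data. A sketch on two synthetic blobs (data and `quantile` value are my own illustrative choices):

```python
import numpy as np
from sklearn.cluster import MeanShift, estimate_bandwidth

# Two tight synthetic blobs; note we never tell the model "2 clusters"
rng = np.random.RandomState(0)
X = np.vstack([
    rng.randn(100, 2) * 0.4 + [0, 0],
    rng.randn(100, 2) * 0.4 + [5, 5],
])

# Bandwidth is the kernel width used in the density estimate;
# quantile controls how local the estimate is
bandwidth = estimate_bandwidth(X, quantile=0.3)
ms = MeanShift(bandwidth=bandwidth)
labels = ms.fit_predict(X)
n_clusters = len(np.unique(labels))
```

The scalability warning comes from the pairwise distance computations: cost grows quickly with the number of samples.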

### Conclusion

The best algorithm is always the one, out of the MANY you try, that produces the best result. As I researched this article (which I will go back and update as I use these more), and after two cups of considerably strong coffee, it became clear that these are very complex tools with many parameters that can be tuned to match the data.

Just like when you are car shopping, you don’t just test drive one car and call it a day. Likewise, it would be a disservice to use only one algorithm when you are solving a data problem.

Happy Hunting!