Learning-based models for Classification
Introduction
Thousands of learning algorithms have been developed in the field of machine learning. Scientists typically select from among these algorithms to solve specific problems, and their options are often limited by their familiarity with them. In this classical/traditional machine learning framework, scientists must make certain assumptions in order to use an existing algorithm. While this can be limiting in some scenarios, it offers the benefits of speed, low computing cost, and ease of use, at the risk of overfitting and reduced accuracy.
In this article, we build multiple learning-based models for classification on sequential data (ECG) to estimate the probability of heart disease. Ensemble learning, Support Vector Machines, Bernoulli Naïve Bayes, K-Nearest Neighbors, and the Random Forest classifier are explored in depth for our classification task. You can check out the code and the dataset in this repository.
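As a rough sketch of the workflow, the classifiers listed above can all be trained through the same scikit-learn interface. The ECG dataset itself lives in the linked repository; here we substitute a synthetic dataset so the example is self-contained, and the hyperparameters shown are illustrative defaults, not the tuned values used in the article.

```python
# Sketch: fitting the classifiers named above with a uniform interface.
# Synthetic data stands in for the ECG dataset from the repository.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.naive_bayes import BernoulliNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier

# Placeholder binary-classification data (illustrative only).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "SVM": SVC(),
    "Bernoulli NB": BernoulliNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: test accuracy = {model.score(X_test, y_test):.3f}")
```

The Random Forest entry doubles as the ensemble-learning example here, since it aggregates many decision trees by majority vote.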
Naïve Bayes
The Naïve Bayes classifier divides data into classes using Bayes' Theorem under the assumption that all predictors are independent of one another: the presence of one feature in a class is assumed to be unrelated to the presence of any other feature.
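The independence assumption lets the class posterior factor into a product of per-feature terms, P(y | x) ∝ P(y) · ∏ᵢ P(xᵢ | y). A minimal sketch of this computation on binary features (the Bernoulli variant), with a tiny made-up dataset and Laplace smoothing assumed:

```python
# Illustration of the Naive Bayes factorization on binary features:
# P(y | x) is proportional to P(y) * prod_i P(x_i | y).
import numpy as np

# Toy binary dataset: 2 features, 2 classes (illustrative only).
X = np.array([[1, 0], [1, 1], [0, 1], [0, 0], [1, 1], [0, 1]])
y = np.array([1, 1, 0, 0, 1, 0])

def predict_proba(x):
    scores = []
    for c in (0, 1):
        prior = np.mean(y == c)                 # P(y = c)
        Xc = X[y == c]
        likelihood = 1.0
        for i, xi in enumerate(x):
            # Per-feature conditionals treated as independent;
            # Laplace smoothing (+1 / +2) avoids zero counts.
            p1 = (Xc[:, i].sum() + 1) / (len(Xc) + 2)
            likelihood *= p1 if xi == 1 else (1 - p1)
        scores.append(prior * likelihood)
    scores = np.array(scores)
    return scores / scores.sum()                # normalize over classes

print(predict_proba([1, 0]))
```

Because each P(xᵢ | y) is estimated independently, the parameter count grows only linearly in the number of features, which is what makes the method fast and data-efficient despite its strong assumption.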