The end is near. No, not the world, but the 12A12D series. After Linear Regression, it’s time to add more DS flavour.
After the basics of Regression, it’s time for the basics of Classification. And what could be easier than Logistic Regression!
This is what Classification actually means:
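In short: assigning each input a discrete label rather than predicting a continuous number. To make that concrete, here is a minimal pure-Python sketch of Logistic Regression trained with plain gradient descent on one feature. The toy data, learning rate, and epoch count are made up for illustration and are not from the original post:

```python
import math

def sigmoid(z):
    # squashes any real number into (0, 1), read as P(class = 1)
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=1000):
    # one feature plus a bias term, stochastic gradient descent on log-loss
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            w -= lr * (p - y) * x   # gradient of log-loss w.r.t. w
            b -= lr * (p - y)       # gradient of log-loss w.r.t. b
    return w, b

# toy data: small x -> class 0, large x -> class 1
xs = [0.5, 1.0, 1.5, 3.0, 3.5, 4.0]
ys = [0, 0, 0, 1, 1, 1]
w, b = train_logistic(xs, ys)

def predict(x):
    # threshold the probability at 0.5 to get a hard class label
    return 1 if sigmoid(w * x + b) >= 0.5 else 0
```

The sigmoid turns the linear score into a probability, and thresholding that probability at 0.5 is what turns regression machinery into a classifier.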
You have been waiting for this since Day 1, ain’t it? Because we like it unconventional, we saved it for the last. The next 2 days also cover Regression.
Linear Regression is the oldest, simplest, and most widely used supervised machine learning algorithm…
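Simple enough, in fact, that the one-feature case has a closed-form least-squares solution you can write in a few lines. A self-contained sketch, with toy data invented for illustration:

```python
def fit_line(xs, ys):
    # ordinary least squares for y = m*x + c with a single feature:
    # m = cov(x, y) / var(x), c = mean(y) - m * mean(x)
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    m = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    c = mean_y - m * mean_x
    return m, c

# toy data lying exactly on y = 2x + 1
m, c = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

Because the toy points sit exactly on a line, the fit recovers the slope and intercept exactly; with noisy data it returns the line minimizing the sum of squared errors.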
A few days back, when I told my friend that having no girlfriend in college and having no girlfriend in school are two independent events, I was told that I was being Naive. The same happened to the Naive Bayes algorithm when it assumed that all the features are independent of each other.
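That "naive" independence assumption shows up in code as a plain product (here, a sum of logs) of per-word likelihoods. A minimal sketch with made-up toy documents; Laplace smoothing is added so an unseen word doesn't zero out a whole class:

```python
import math
from collections import Counter

def train_nb(docs, labels):
    # count word frequencies per class; treating every word as
    # independent given the class is the "naive" part
    word_counts = {c: Counter() for c in set(labels)}
    class_counts = Counter(labels)
    vocab = set()
    for doc, label in zip(docs, labels):
        for word in doc.split():
            word_counts[label][word] += 1
            vocab.add(word)
    return word_counts, class_counts, vocab

def predict_nb(doc, word_counts, class_counts, vocab):
    total = sum(class_counts.values())
    best, best_score = None, -math.inf
    for c, n_c in class_counts.items():
        # log prior + sum of per-word log likelihoods (Laplace-smoothed)
        score = math.log(n_c / total)
        denom = sum(word_counts[c].values()) + len(vocab)
        for word in doc.split():
            score += math.log((word_counts[c][word] + 1) / denom)
        if score > best_score:
            best, best_score = c, score
    return best

# invented toy corpus
docs = ["win money now", "free money prize", "meeting at noon", "lunch at noon"]
labels = ["spam", "spam", "ham", "ham"]
model = train_nb(docs, labels)
```

Working in log space avoids numerical underflow when many small probabilities get multiplied together.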
In our 12A12D series, we are trying to be as diverse as possible. I can bet that 90% of you are unaware of ARM (Association Rule Mining) techniques, even though they are among the basics of Data Mining.
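For the unacquainted: ARM mines rules like {bread} → {milk} from transaction data, scored by support (how often the items appear together) and confidence (how often the rule holds when its left side appears). A tiny sketch over an invented basket dataset:

```python
def support(itemset, transactions):
    # fraction of transactions that contain every item in the set
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    # estimate of P(consequent | antecedent) from the data
    return (support(antecedent | consequent, transactions)
            / support(antecedent, transactions))

# made-up shopping baskets
baskets = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "eggs"},
]
s = support({"bread"}, baskets)
c = confidence({"bread"}, {"milk"}, baskets)
```

Real miners like Apriori add a clever pruning step (every subset of a frequent itemset must itself be frequent) so they never enumerate all itemsets, but the two scores above are the whole scoring machinery.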
After Supervised Learning algorithms, it’s time to have a look at the most popular Unsupervised method. Here, we present to you - Clustering, and its variants.
Let’s look at its simplicity here:
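To show that simplicity, here is k-means, the workhorse clustering variant, on 1-D points: assign each point to its nearest centroid, then move each centroid to the mean of its assigned points, and repeat. The naive "first k points" initialization and the toy data are mine, for illustration only:

```python
def kmeans_1d(points, k, iters=20):
    # naive init: just take the first k points as starting centroids
    centroids = points[:k]
    for _ in range(iters):
        # assignment step: each point joins its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # update step: move each centroid to the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# two obvious toy clusters, around 1 and around 10
centers = kmeans_1d([1.0, 1.2, 0.8, 10.0, 10.5, 9.5], k=2)
```

Real implementations use smarter seeding (e.g. k-means++) because this alternation only finds a local optimum, so the starting centroids matter.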
As the 12A12D series is progressing, there are some miscellaneous topics to be covered. One of them is Dimensionality Reduction, or simply, dealing with a large number of dimensions/features/variables.
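The classic tool here is PCA: find the direction of maximum variance and project the data onto it. A rough 2-D sketch using power iteration on the covariance matrix to get the first principal component; the toy data (points scattered near the line y = x) is invented for illustration:

```python
def pca_first_component(data, iters=100):
    # center the data, build the 2x2 covariance matrix, then use power
    # iteration to find its leading eigenvector, i.e. the first PC
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    centered = [(x - mx, y - my) for x, y in data]
    cxx = sum(x * x for x, _ in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    vx, vy = 1.0, 0.0
    for _ in range(iters):
        # multiply by the covariance matrix, then renormalize
        wx = cxx * vx + cxy * vy
        wy = cxy * vx + cyy * vy
        norm = (wx * wx + wy * wy) ** 0.5
        vx, vy = wx / norm, wy / norm
    return vx, vy

# toy points lying close to y = x, so the first PC points along (1, 1)
vx, vy = pca_first_component([(1, 1.1), (2, 1.9), (3, 3.2), (4, 3.8)])
```

Projecting each point onto this direction compresses two correlated features into one while keeping most of the variance, which is the whole game in dimensionality reduction.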
After completing 4 Algos of our 12 Algos in 12 Days series, here is one of the most popular and most talked-about algorithms since its development. The thing I like best about SVM is that it’s built entirely on a mathematical optimization problem.
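That optimization view can be sketched directly: minimize the regularized hinge loss by sub-gradient descent. This is a bare-bones linear SVM on a single feature, not the kernelized solvers real libraries use, and the data and hyperparameters are made up:

```python
def train_linear_svm(xs, ys, lam=0.01, lr=0.01, epochs=2000):
    # sub-gradient descent on the primal SVM objective:
    #   (lam / 2) * w^2 + mean(max(0, 1 - y * (w*x + b)))
    # labels must be +1 / -1
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            margin = y * (w * x + b)
            if margin < 1:                 # inside the margin: hinge active
                w -= lr * (lam * w - y * x)
                b -= lr * (-y)
            else:                          # safely classified: only decay w
                w -= lr * lam * w
    return w, b

# toy separable data: negatives on the left, positives on the right
xs = [1.0, 2.0, 6.0, 7.0]
ys = [-1, -1, 1, 1]
w, b = train_linear_svm(xs, ys)

def classify(x):
    return 1 if w * x + b >= 0 else -1
```

The hinge loss only penalizes points inside the margin, which is why the solution depends on a handful of support vectors rather than on every training point.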
I know it’s getting arduous to catch up with our daily posts, but that’s just how it’s meant to be: 12 Algos in 12 Days. Here, we unbox one of the most powerful ML techniques used by Grandmasters to win Data Hackathons on Kaggle. You’ll consider yourself lucky if you understand this properly.
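A hand-rolled sketch of the core idea behind boosting libraries like XGBoost: repeatedly fit a weak learner (here, a one-split regression stump) to the residuals of the current ensemble, then add it in with a damping factor. Toy data, learning rate, and round count are invented for illustration:

```python
def fit_stump(xs, residuals):
    # best single-threshold regression stump: two leaf means
    best = None
    for t in sorted(set(xs)):
        left = [r for x, r in zip(xs, residuals) if x <= t]
        right = [r for x, r in zip(xs, residuals) if x > t]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lm, rm)
    _, t, lm, rm = best
    return lambda x: lm if x <= t else rm

def gradient_boost(xs, ys, rounds=50, lr=0.1):
    base = sum(ys) / len(ys)          # start from the mean prediction
    pred = [base] * len(xs)
    stumps = []
    for _ in range(rounds):
        # residuals are the negative gradient of squared loss
        residuals = [y - p for y, p in zip(ys, pred)]
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: base + lr * sum(s(x) for s in stumps)

# toy step-shaped data the ensemble should learn to reproduce
xs = [1, 2, 3, 4, 5, 6]
ys = [1, 1, 1, 5, 5, 5]
model = gradient_boost(xs, ys)
```

The damping factor (the learning rate) is what keeps each weak learner from overcorrecting; production libraries add regularized tree growth, subsampling, and much more on top of this loop.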