Let us let the cat out of the bag. Why do we have such a hard time asking for help? Deep down, we all know that asking for help comes with many benefits, some even backed by science: connecting with others, increasing productivity, maturing our mindset, and making us happier.
Now, we might all have our own reasons for not asking for help, but there are a few common thinking patterns, such as:
Blogging is an interesting and easy way to express your thoughts. It not only complements your current skills but can also become a great full-time career. If you are in the technology field, you have ample topics to write about and showcase your skills.
When it comes to Data Science, just completing online courses and picking up skills is not enough. Building your personal brand is equally important now. That personal brand can be built in various ways, one of which is blogging. …
Today we will be building a neat bare-bones Spam Message Classifier, a Natural Language Processing based model. Then we will build a Flask application which renders an HTML-based home page and a prediction page. The user will input text on the home page, and the application will predict whether it looks like a spam message or ham (not spam) on the prediction page. This Flask API will be deployed to a public host on Heroku. Heroku is the obvious choice here as it’s super quick and super easy. And did I mention it is free? Yes it…
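The flow described above could be sketched roughly like this, assuming Flask is installed. The route names, form field, and keyword-based placeholder classifier are all illustrative stand-ins for the real trained NLP model:

```python
# A minimal sketch of the Flask app described above. The keyword check is a
# placeholder for the trained spam model; all names here are assumptions.
from flask import Flask, request

app = Flask(__name__)

def predict_spam(text):
    # Stand-in for the real classifier: flag a few obvious spam words.
    spam_words = {"free", "winner", "prize"}
    return any(word in text.lower().split() for word in spam_words)

@app.route("/")
def home():
    # The real app would render an HTML template; a bare form works for a demo.
    return ('<form action="/predict" method="post">'
            '<input name="message"><button>Check</button></form>')

@app.route("/predict", methods=["POST"])
def predict():
    message = request.form.get("message", "")
    return "Spam" if predict_spam(message) else "Ham"
```

Running this locally with `flask run` serves the form; deploying to Heroku additionally needs a `Procfile` and a production WSGI server such as gunicorn.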
Naive Bayes is a classification algorithm based on the famous Bayes’ theorem. So let’s first understand what Bayes’ theorem says and build the intuition for the Naive Bayes algorithm: how it works, and what is so naive about it?
Before diving into Bayes’ theorem, we need to understand a few terms —
Consider 2 events, A and B. When the probability of occurrence of event A doesn’t depend on the occurrence of event B, then A and B are independent events. For example, if you have 2 fair coins, then the…
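The two ideas above can be checked numerically. Here is a minimal sketch (not part of the article) using the standard fair-coin probabilities:

```python
# Independence of two fair coin flips: P(A and B) = P(A) * P(B).
p_a = 0.5          # first coin lands heads
p_b = 0.5          # second coin lands heads
p_a_and_b = p_a * p_b
print(p_a_and_b)   # 0.25

# Bayes' theorem: P(A|B) = P(B|A) * P(A) / P(B)
def bayes(p_b_given_a, p_a, p_b):
    return p_b_given_a * p_a / p_b

# For independent events P(B|A) = P(B), so Bayes' theorem
# gives back P(A|B) = P(A), as it should.
print(bayes(p_b, p_a, p_b))  # 0.5
```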
Your first Machine Learning integrated chatbot on Google’s DialogFlow in 4 steps!
Any Machine Learning model is pretty much useless unless you put it to some real-life use. Running the model in a Jupyter Notebook and bragging about 99.99% accuracy doesn’t help. You need to make an end-to-end application out of it to present it to the outside world. And chatbots are one fun and easy way to do that.
Every Machine Learning project is unique in its own way. Still, for each such project, a predefined set of steps can be followed. There is no strict flow that must be obeyed, but a general template can be proposed.
The first step in not just an ML project but any project is to simply define the problem at hand. You first need to understand the situation and the problem that needs to be solved, and then devise how Machine Learning could solve it efficiently. Once you know the problem well, you can head on to solving it.
Principal Component Analysis is the process of compressing a lot of data into something that captures the essence of the data.
PCA (Principal Component Analysis) is a technique which finds the major patterns in data for dimensionality reduction. Reading this line for the first time may trigger a few questions:
What are these patterns?
How to find these patterns?
What is dimensionality reduction?
And what are dimensions anyway?
And why to reduce them?
Let’s go through each of them one by one.
Assume we have a dataset with, say, 300 columns. So our dataset has 300 dimensions. Neat. But do…
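As a quick illustration of the idea (not the article’s own code), here is a minimal sketch using scikit-learn to compress a 300-column dataset down to a handful of principal components; the shapes and component count are arbitrary:

```python
# A minimal sketch of dimensionality reduction with PCA on random data.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 300))  # 100 samples, 300 dimensions

pca = PCA(n_components=10)       # compress 300 dims down to 10
X_reduced = pca.fit_transform(X)
print(X_reduced.shape)           # (100, 10)
```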
In this article we will hard-code Logistic Regression, using the Gradient Descent optimizer. If you need a refresher on Gradient Descent, go through my earlier article on the topic. Here I’ll be using the famous Iris dataset to predict the classes using Logistic Regression, without the Logistic Regression module in the scikit-learn library. Let’s start!
Let’s start by importing all the required libraries and the dataset. This dataset has 3 classes, but I will be demonstrating the Gradient Descent solution using only 2 classes to make it easier to understand. The data set…
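To make the idea concrete before the walkthrough, here is a minimal from-scratch sketch of binary logistic regression trained with gradient descent on toy data. The function names, learning rate, and epoch count are illustrative, not the article’s actual code:

```python
# Binary logistic regression trained with plain gradient descent,
# hard-coded with NumPy (no scikit-learn). Hyperparameters are illustrative.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, epochs=1000):
    w = np.zeros(X.shape[1])
    b = 0.0
    n = len(y)
    for _ in range(epochs):
        p = sigmoid(X @ w + b)        # predicted probabilities
        grad_w = X.T @ (p - y) / n    # gradient of the log loss w.r.t. w
        grad_b = np.mean(p - y)       # gradient w.r.t. the bias
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy linearly separable data standing in for the two Iris classes.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
w, b = fit_logistic(X, y)
preds = (sigmoid(X @ w + b) >= 0.5).astype(int)
print(preds)  # [0 0 1 1]
```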
How do I know what the correct Loss Function is for my Algorithm? Because if I choose the wrong Loss Function, I’ll get the wrong solution.
In Machine Learning, our main goal is to minimize the error, which is defined by the Loss Function. And every type of Algorithm has different ways of measuring the error. In this article I’ll go through some basic Loss Functions used in Regression Algorithms and why exactly they are defined that way. Let’s begin.
Suppose we have 2 loss functions. The two functions will have different minima. So if you optimize the wrong loss…
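The point above can be seen with two standard regression losses, MSE and MAE. A minimal sketch (values are illustrative): for a constant prediction, MSE is minimized by the mean of the targets, while MAE is minimized by the median, so with an outlier the two losses prefer genuinely different solutions.

```python
# Two common regression loss functions with different minima.
import numpy as np

def mse(y_true, y_pred):
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    return np.mean(np.abs(y_true - y_pred))

# With an outlier, the constant minimizing MSE (the mean) and the
# constant minimizing MAE (the median) are far apart.
y = np.array([1.0, 2.0, 3.0, 100.0])
print(np.mean(y), np.median(y))      # 26.5 vs 2.5
print(mse(y, np.mean(y)))            # lower than MSE at the median
print(mae(y, np.median(y)))          # lower than MAE at the mean
```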
What is Optimization? Techniques for optimization (the analytical approach and the iterative approach), and finally implementation in Python.
Optimization is at the core of Machine Learning.
Optimization, in very strict terms, is the process of finding the values for which your Cost Function gives a minimum value. For any optimization problem in Machine Learning, there can be either an analytical approach or an iterative (numerical) approach. The analytical approach yields a closed-form solution which doesn’t change, so such problems are deterministic; hence they are also called time-invariant problems. The iterative approach, by contrast, starts from an initial guess and refines it step by step. …
Data Science and Machine Learning Enthusiast