Machine learning is the hot topic of this century because it raises a question that goes to the heart of human existence: what purpose will humans have if machines can perform all our tasks as well as we can, or perhaps even better? Now that computers can handle all sorts of 'literal' mathematical operations to help us with our daily tasks, we are on a quest to make them actually 'do' those tasks independently, without any human intervention. Sounds quirky, doesn't it? It seems as if the scenes of sci-fi movies might become a fragment of reality, or possibly the whole of it. Talk of human-like emotions and decision-making in a piece of software (or a robot, which is just the hardware that hosts the software) invites a lot of skepticism. The answer to that skepticism lies in us humans, whose brains serve as the model for these machines, precisely because the brain is the most intricately crafted thing on the planet, with structure and organisation intermingling in a way that has let it govern the progress of the world. Computers will come closer to humans as their decision-making and predictive capabilities grow.

Warren Buffett is a strong advocate of 'rational thought', which rests on making judgements based on data and facts. This school of thought is largely responsible for his success in the stock market: predictive analysis based on previous data can help us anticipate future conditions, and that is what the majority of Wall Street does. Search for conditions that correspond to those in the past and predict, i.e. what happened at an earlier time under the same conditions is likely to happen again in the present or the future. Sounds nice, but what about possibilities that have never occurred before and for which we have no data? How do we account for those and keep our predictions accurate? This is where unsupervised machine learning comes into play, as the rest of this article will make clear.
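To make the 'match past conditions and predict' idea concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the feature names and numbers are made up purely for illustration. A k-nearest-neighbours regressor predicts by averaging the outcomes of the most similar periods in the historical data.

```python
# A toy sketch of "find similar past conditions and predict" using
# k-nearest neighbours (illustrative data and feature names are made up).
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Each row: [interest_rate, unemployment, market_sentiment] for a past period
past_conditions = np.array([
    [2.0, 5.0, 0.7],
    [1.5, 6.0, 0.4],
    [3.0, 4.5, 0.9],
    [2.5, 5.5, 0.6],
])
# What the market actually did in each of those periods (% return)
past_returns = np.array([4.2, -1.0, 6.5, 2.3])

model = KNeighborsRegressor(n_neighbors=2)
model.fit(past_conditions, past_returns)

# Predict for today's conditions by averaging the most similar past periods
today = np.array([[2.2, 5.2, 0.65]])
print(model.predict(today))
```

The sketch only ever looks backwards: if today's conditions resemble nothing in the table, the prediction is still built from the nearest past periods, which is exactly the weakness the next example runs into.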

Let's take a case where predictive analysis based on patterns in previous data failed: the 2008 financial crisis. The algorithms predicted that real estate would keep going up, based on market sentiment and prevailing conditions, and that call was in accordance with the 'rational school of thought'. But all of this is governed by a lot of human factors and the 'free will' of the market; the supply-demand balance was disturbed and real estate actually went down. (I am not a pro at economics, but I hope you get the background I am trying to set up.) What I am really saying is that there was an outlier in the data which did not fit the modelled curve. Say you have an x-axis depicting time and a y-axis depicting something like cost. To predict, you fit a general curve and extrapolate from it. Now, if there is an outlier that deviates wildly from that curve, how will you ever know about it unless the phenomenon physically takes place? Your predictions will not be really 'wrong' per se, but the risk involved will be high because the outliers were never incorporated into your data.
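Here is a minimal sketch of that fit-and-extrapolate workflow, using synthetic numbers and NumPy's polyfit; the 'crash' value is invented just to show how far an unseen outlier can sit from the fitted trend.

```python
# A minimal sketch of fitting a trend and extrapolating, with synthetic data.
import numpy as np

time = np.arange(10)                                       # x-axis: time
cost = 100 + 5 * time + np.random.normal(0, 2, size=10)    # y-axis: cost

# Fit a straight line (degree-1 polynomial) to the history
slope, intercept = np.polyfit(time, cost, deg=1)

# Extrapolate to a future time step
future_t = 15
print("extrapolated cost:", slope * future_t + intercept)

# An outlier the model has never seen (e.g. a crash) sits far from the line
crash_cost = 20
residual = crash_cost - (slope * 12 + intercept)
print("deviation of the unseen crash from the fitted trend:", residual)
```

Nothing in the fitted line hints that a value like the crash is even possible, because no such point ever appeared in the history it was fitted on.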

The answer lies in unsupervised machine learning, where we provide no labelled 'training data set': the computer works through innumerable possibilities, or iterations, of a task so that the widest possible variety of inputs to an algorithm is covered, even the ones that may never have occurred. Clustering of similar data then helps us make predictions according to probabilities and the classification of parameters. How this can be done is best shown through real-life examples.
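As a small sketch of what 'clustering without labels' looks like in practice, here is a toy k-means example with made-up two-dimensional points, assuming scikit-learn; the algorithm groups similar observations on its own and can then assign new points to the nearest group.

```python
# A toy sketch of clustering unlabelled data with k-means (made-up points).
import numpy as np
from sklearn.cluster import KMeans

# No labels are provided -- just raw observations
points = np.array([
    [1.0, 1.1], [0.9, 1.0], [1.2, 0.8],   # one natural group
    [8.0, 8.2], [7.8, 8.1], [8.3, 7.9],   # another natural group
    [4.5, 15.0],                          # a point far from both groups
])

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(points)
print("cluster assignments:", kmeans.labels_)

# New observations can be assigned to the nearest cluster
print("cluster of a new point:", kmeans.predict([[1.1, 0.9]]))
```

The point far from both groups is exactly the kind of observation that stands out under clustering, even though nobody told the algorithm what 'normal' looks like.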

Suppose I blindfold you and repeatedly throw you. I could throw you in any number of ways: from 2 feet, 3 feet, 2 floors or any other height, and push you hard or gently. With each throw your mind starts to predict, that is, you form conjectures about what will happen the next time I push you. Say you form a hypothesis that there is a 40% chance I will throw you from a height of 2 feet, a 10% chance for 5 feet, 40% for 10 feet and 10% for 2 floors. Similarly, we let the computer run through millions of possibilities, execute the algorithm, and check whether the results correspond to the actual data, which points towards the accuracy of the system.
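A quick sketch of that thought experiment in code: simulate a large number of 'throws' from a hidden distribution and estimate the probabilities from the observed frequencies. The heights and the 40/10/40/10 split come straight from the example above; everything else is illustrative.

```python
# A sketch of the blindfold thought experiment: simulate many "throws" and
# estimate the probability of each height from the observed frequencies.
import random
from collections import Counter

heights = ["2 feet", "5 feet", "10 feet", "2 floors"]
true_probs = [0.4, 0.1, 0.4, 0.1]   # the thrower's hidden distribution

random.seed(0)
trials = 100_000
observed = Counter(random.choices(heights, weights=true_probs, k=trials))

# After enough trials the estimated probabilities converge on the true ones
for h in heights:
    print(h, round(observed[h] / trials, 3))
```

Running more trials tightens the estimates, which is the sense in which letting the computer take millions of possibilities makes its conjectures line up with the actual data.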

Even though we think of computers as perfect, when a computer deals with abstraction we cannot expect 100% accuracy, because what it is dealing with is FUZZY LOGIC and not just simple 1+2+3. FUZZY LOGIC is to computers what QUANTUM MECHANICS is to physics: the two are somewhat analogous when it comes down to superposition states and the 'superposition' of bits.
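To show what 'fuzzy' means here, a tiny sketch: instead of a hard true/false, a fuzzy membership function returns a degree between 0 and 1. The temperature thresholds below are made up for illustration.

```python
# A tiny illustration of fuzzy vs. crisp logic: membership is a degree in
# [0, 1] rather than a hard True/False (thresholds here are made up).
def crisp_is_hot(temp_c: float) -> bool:
    return temp_c >= 30          # classical logic: either hot or not

def fuzzy_is_hot(temp_c: float) -> float:
    # Degree of "hotness" rises linearly between 20C and 40C
    return min(1.0, max(0.0, (temp_c - 20) / 20))

for t in (15, 25, 32, 45):
    print(t, crisp_is_hot(t), round(fuzzy_is_hot(t), 2))
```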