Bayesian Statistics and Causal Inference

The domain of causal inference matters enormously: it lets us estimate the real effects of features, distinguish what merely seems correlated from what is genuinely correlated, and build intelligent systems on established relationships rather than on data that only represents the past.

When we train a classifier, we want to estimate the probability that a record/an image/a signal belongs to a particular class. This notion of probability rests on one of two philosophical interpretations: Bayesian and Frequentist. Both approaches are used to quantify uncertainty:

Bayesian Probability incorporates belief, or rationale, while estimating probabilities. Given a set of features and their distribution (the prior), what is the probability of a record being classified as, say, X (the posterior)? This estimation does not only require data (from which the prior probability is calculated) but also takes into account the relationships (imagine graphs) between the class and the features.
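As a minimal sketch with made-up numbers, Bayes' theorem shows how a prior belief about class X is updated by the likelihood of the observed features to yield the posterior:

```python
# Bayes' theorem: P(X | features) = P(features | X) * P(X) / P(features)
# All numbers here are hypothetical, chosen only for illustration.
prior_x = 0.3                  # P(X): prior belief that a record is class X
likelihood_given_x = 0.8       # P(features | X)
likelihood_given_not_x = 0.2   # P(features | not X)

# Evidence: total probability of observing these features under either class
evidence = likelihood_given_x * prior_x + likelihood_given_not_x * (1 - prior_x)

# Posterior: updated belief in class X after seeing the features
posterior_x = likelihood_given_x * prior_x / evidence
print(round(posterior_x, 3))  # prints 0.632
```

Note how the data (the likelihoods) and the prior belief both shape the final estimate; neither alone determines it.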

Frequentist Probability relies on data alone. The data is assumed to be sampled from some distribution, and we ask which parameter values maximise the likelihood of observing it — so the probability of a record being classified as X is estimated directly from the observed frequencies.
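A one-line sketch of the frequentist estimate, using a hypothetical sample of labels: the maximum-likelihood estimate of the probability of class X is simply its relative frequency in the data.

```python
# Hypothetical sample of class labels (1 = class X, 0 = not X).
labels = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]

# The maximum-likelihood estimate of a Bernoulli parameter
# is the sample mean -- data alone, no prior belief involved.
p_x = sum(labels) / len(labels)
print(p_x)  # prints 0.7
```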

Let’s try to understand the difference with an example. Suppose my chances of going out depend only on whether it’s sunny, and we have a dataset that captures whether I went out, whether it was sunny, and the temperature. Under Bayesian inference, the probability of me going out combines the prior probability of the sun shining with the posterior probability of me going out given that the sun shines. Frequentist inference, in contrast, will predict my chances of going out purely from the number of days I went out, with sunny and temperature as features. It might be that it was sunny the whole time, so the data fails to establish the correlation between sunlight and me going out. It is also possible that the temperature rose when the sun shone, which led to a spurious correlation between temperature and me going out.
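The confounding problem above can be made concrete with a toy dataset (entirely made up for illustration) in which sunny days and hot days always coincide:

```python
# Hypothetical records: each row is (sunny, hot, went_out).
# Sunny and hot always co-occur, so temperature is confounded with sunlight.
data = [
    (1, 1, 1), (1, 1, 1), (1, 1, 1), (1, 1, 0),
    (0, 0, 0), (0, 0, 0), (0, 0, 1), (0, 0, 0),
]

def p_out_given(feature_index, value):
    # Frequentist conditional frequency: P(went_out | feature == value)
    rows = [r for r in data if r[feature_index] == value]
    return sum(r[2] for r in rows) / len(rows)

p_out_sunny = p_out_given(0, 1)  # P(out | sunny) = 0.75
p_out_hot = p_out_given(1, 1)    # P(out | hot)   = 0.75
```

Because the two features never vary independently in this sample, counting alone assigns them identical predictive power; the causal structure (sun, not temperature, drives going out) has to come from outside the data.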

Many ML classifiers are based on frequentist inference, which does not leverage the established relationships among features, including the target class. Causal inference is useful for leveraging these relationships: identifying what actually caused a certain classification/prediction, and which features are useless in the analysis. I will be sharing more on this domain in coming posts.

Cheers!


Lazy Explorer

Urja Pawar

Urja Pawar

Lazy Explorer

More from Medium

Causal Inference

Explainable AI

Responsible Machine Learning for Survival Analysis

Explainability Using Bayesian Networks