What is Bayesian Linear Regression?

Matthias Werner
dida Machine Learning
2 min read · Feb 17, 2020

Bayesian regression methods are very powerful: they not only provide point estimates of the regression parameters, they deliver an entire distribution over these parameters. This can be understood as learning not just one model but an entire family of models, each weighted according to how likely it is to be correct. Because this weight distribution depends on the observed data, Bayesian methods can give us an uncertainty quantification of our predictions that reflects what the model was able to learn from the data. The uncertainty measure could be, for example, the standard deviation of the predictions of all the models, something that point estimators will not provide by default. Knowing what the model doesn't know helps to make AI more explainable. To clarify the basic idea of Bayesian regression, we will stick to discussing Bayesian Linear Regression (BLR).
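To make this concrete, here is a minimal sketch (not part of the original post) of Bayesian linear regression with a Gaussian prior on the weights and a known noise level; the toy data and the prior precision `alpha` and noise precision `beta` are illustrative choices, not values from the article. With these assumptions the posterior over the weights is Gaussian with a closed-form mean and covariance, and the predictive standard deviation is exactly the kind of uncertainty measure described above.

```python
import numpy as np

# Toy data: y = 2x - 1 + Gaussian noise (purely illustrative)
rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 20)
y = 2.0 * x - 1.0 + rng.normal(scale=0.3, size=x.shape)

# Design matrix with a bias column
Phi = np.column_stack([np.ones_like(x), x])

alpha = 1.0           # prior precision: w ~ N(0, alpha^{-1} I)  (assumed)
beta = 1.0 / 0.3**2   # noise precision, assumed known here

# Posterior over the weights (Gaussian, closed form):
# S_N = (alpha I + beta Phi^T Phi)^{-1},  m_N = beta S_N Phi^T y
S_N = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
m_N = beta * S_N @ Phi.T @ y

# Predictive distribution at a new input:
# mean = phi(x)^T m_N,  variance = 1/beta + phi(x)^T S_N phi(x)
x_new = np.array([0.5])
phi_new = np.column_stack([np.ones_like(x_new), x_new])
pred_mean = phi_new @ m_N
pred_std = np.sqrt(1.0 / beta + np.sum((phi_new @ S_N) * phi_new, axis=1))

print(f"prediction at x=0.5: {pred_mean[0]:.2f} +/- {pred_std[0]:.2f}")
```

Sampling weight vectors from this posterior corresponds to the "family of models" view: each sample is one plausible regression line, and the spread of their predictions at a given input is the uncertainty the paragraph above refers to.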

BLR is the Bayesian approach to linear regression analysis. We will start with an example to motivate the method. To make things clearer, we will then introduce a couple of non-Bayesian methods that the reader might already be familiar with and discuss how they relate to Bayesian regression.

In the following, I assume that you have elementary knowledge of linear algebra and probability theory.

Let’s get started!

A simple example

The task we want to solve is the following: …

Since Medium doesn’t support LaTeX formulas, you can find the remainder of the first part and the second part of this series only on our website’s blog:

What is Bayesian Linear Regression? (Part 1)

What is Bayesian Linear Regression? (Part 2)

Originally published at https://dida.do on February 17, 2020.

Matthias Werner
dida Machine Learning

Data Scientist at dida.do — my particular interests are Computer Vision, Reinforcement Learning and Bayesian Neural Networks