FAU Lecture Notes in Pattern Recognition
What do the Gaussian, Poisson, and Binomial Probability Mass Functions have in common?
The Exponential Family of Probability Density and Mass Functions
These are the lecture notes for FAU’s YouTube Lecture “Pattern Recognition”. This is a full transcript of the lecture video with matching slides. The sources for the slides are available here. We hope you enjoy this as much as the videos. This transcript was almost entirely machine generated using AutoBlog, and only minor manual modifications were performed. If you spot mistakes, please let us know!
Navigation
Previous Chapter / Watch this Video / Next Chapter / Top Level
Welcome back to Pattern Recognition! Today, we want to look a little more into the modeling of decision boundaries. In particular, we are interested in what happens with other distributions, and in what happens if the class distributions share the same dispersion, i.e., the same standard deviations or covariance matrices.
Now we want to look into a special case. The special case with the Gaussian here is that we have a covariance matrix that is identical for both classes. If we do so, we can see that the formulation that we found in the previous video collapses: in particular, the matrix A, which was the quadratic part, simplifies to a zero matrix. This means that the entire quadratic part cancels out, because A is essentially formed from the difference of the two inverse covariance matrices; they are identical, so the difference is simply zero. The nice thing here is that we can already see from the remaining formulation that we now have a line separating the two distributions. The line is essentially given by the difference between the two means, weighted by the inverse covariance matrix. Of course, there is also an offset, and the offset depends mainly on the prior probabilities of the two classes; this is then weighted by the difference of the two means…
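The cancellation described above can be sketched numerically. The following snippet (an illustration, not the lecture's own code; the function and variable names are ours) computes the linear decision function g(x) = αᵀx + α₀ for two Gaussian classes that share one covariance matrix, where the boundary g(x) = 0 is the separating line:

```python
import numpy as np

def linear_decision_boundary(mu1, mu2, sigma, p1, p2):
    """Coefficients of g(x) = alpha @ x + alpha0, the log-likelihood
    ratio of two Gaussians with means mu1, mu2, shared covariance
    `sigma`, and priors p1, p2. Because the covariance is shared,
    the quadratic term vanishes and the boundary g(x) = 0 is linear."""
    sigma_inv = np.linalg.inv(sigma)
    diff = mu1 - mu2
    # slope: difference of the means, weighted by the inverse covariance
    alpha = sigma_inv @ diff
    # offset: prior ratio plus a term weighted by the difference of means
    alpha0 = np.log(p1 / p2) - 0.5 * (mu1 + mu2) @ sigma_inv @ diff
    return alpha, alpha0

# two classes with equal covariance and equal priors
mu1, mu2 = np.array([1.0, 0.0]), np.array([-1.0, 0.0])
alpha, alpha0 = linear_decision_boundary(mu1, mu2, np.eye(2), 0.5, 0.5)

# with equal priors, the midpoint of the two means lies on the boundary
midpoint = 0.5 * (mu1 + mu2)
print(alpha, alpha0, alpha @ midpoint + alpha0)
```

With equal priors the log-prior term vanishes, and the boundary passes through the midpoint of the two means, which matches the intuition above.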