FAU Lecture Notes in Pattern Recognition, CODEX

How many Dimensions do we need in a linearly separating Space?

Linear Discriminant Analysis — Properties

Andreas Maier
Published in CodeX · 9 min read · Mar 22, 2021

Image under CC BY 4.0 from the Pattern Recognition Lecture

These are the lecture notes for FAU’s YouTube Lecture “Pattern Recognition”. This is a full transcript of the lecture video together with the matching slides. The sources for the slides are available here. We hope you enjoy these notes as much as the videos. This transcript was almost entirely machine generated using AutoBlog, and only minor manual modifications were performed. If you spot mistakes, please let us know!

Navigation

Previous Chapter / Watch this Video / Next Chapter / Top Level

Welcome back to Pattern Recognition. Today we want to continue thinking about discriminative modeling and feature transforms. We had the idea of performing, essentially, a class-wise normalization in our feature space. If we do so, certain properties emerge in this feature space, and those are what we want to look at today.
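To make the idea of a class-wise normalization concrete, here is a minimal sketch (not from the lecture) that whitens each class to zero mean and identity covariance. The function name classwise_normalize and the eigendecomposition-based whitening are illustrative assumptions, not the lecture’s exact formulation.

```python
import numpy as np

def classwise_normalize(X, y, eps=1e-10):
    """Sketch of a class-wise normalization (per-class whitening).

    X: (N, d) feature matrix, y: (N,) integer class labels.
    Each class is shifted to zero mean and scaled so that its
    covariance becomes (approximately) the identity matrix.
    """
    X_out = np.empty(X.shape, dtype=float)
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)                    # class mean
        Sigma = np.cov(Xc, rowvar=False)        # class covariance
        # Whitening matrix Sigma^(-1/2) via eigendecomposition
        evals, evecs = np.linalg.eigh(Sigma)
        W = evecs @ np.diag(1.0 / np.sqrt(evals + eps)) @ evecs.T
        X_out[y == c] = (Xc - mu) @ W.T
    return X_out
```

After such a transform every class shares the same (identity) covariance, which is exactly the situation in which the decision boundaries of a Gaussian classifier become linear.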

Image under CC BY 4.0 from the Pattern Recognition Lecture

This is essentially the pathway towards linear discriminant analysis. We have some input training…


Andreas Maier, writer for CodeX
I do research in Machine Learning. My positions include being Prof @FAU_Germany, President @DataDonors, and Board Member for Science & Technology @TimeMachineEU