FAU Lecture Notes in Pattern Recognition

Can my Kernel really be Implemented as a Transform?

Mercer’s Theorem and the Kernel SVM

Andreas Maier
Published in CodeX · 10 min read · Apr 22, 2021


Image under CC BY 4.0 from the Pattern Recognition Lecture

These are the lecture notes for FAU’s YouTube Lecture “Pattern Recognition”. This is a full transcript of the lecture video with matching slides. The sources for the slides are available here. We hope you enjoy this as much as the videos. This transcript was almost entirely machine-generated using AutoBlog, and only minor manual modifications were performed. If you spot mistakes, please let us know!

Navigation

Previous Chapter / Watch this Video / Next Chapter / Top Level

Welcome back, everybody, to Pattern Recognition! Today we want to explore the concept of kernels a bit further. We will look into something that is called Mercer’s Theorem. Looking forward to exploring kernel spaces!

Image under CC BY 4.0 from the Pattern Recognition Lecture

Let’s have a look into kernels! So far we’ve seen that linear decision boundaries in their current form have serious limitations. They are too simple to provide good decision boundaries: nonlinearly separable data cannot be classified. Noisy data cause…
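Before diving into Mercer’s Theorem, the title question can be illustrated with a small numerical sketch (not from the lecture itself): for the homogeneous degree-2 polynomial kernel on 2-D inputs, the feature transform is known explicitly, so we can check that evaluating the kernel in input space gives exactly the inner product of the transformed vectors. The names `phi` and `k` below are illustrative choices, not notation from the lecture.

```python
import numpy as np

def phi(x):
    """Explicit feature map for the degree-2 polynomial kernel k(x, z) = (x^T z)^2.
    For 2-D inputs: phi(x) = (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return np.array([x[0] ** 2, np.sqrt(2.0) * x[0] * x[1], x[1] ** 2])

def k(x, z):
    """The same kernel evaluated directly in input space -- no transform needed."""
    return float(np.dot(x, z)) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])

# Both routes give the same number: the kernel *is* an inner product
# in the (here explicitly known) feature space.
print(k(x, z))                      # (1*3 + 2*0.5)^2 = 16.0
print(np.dot(phi(x), phi(z)))       # 9 + 6 + 1 = 16.0
```

Mercer’s Theorem, which this chapter introduces, tells us when such a feature space exists even if we cannot write the transform down explicitly.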

