Disclaimer! It’s not Final Fantasy Tactics! I mean I like the game, but this one is cool too.
It’s been a while, but we have covered various types of Fourier Transforms; you can recap the summary from the link below. Today, we are going to cover something called the Fast Fourier Transform (FFT), which is simply an optimized algorithm for computing the Discrete Fourier Transform much faster.
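Just to give a taste of what “optimized” means here, below is a minimal NumPy sketch (my own toy example, not from the lectures) comparing a naive O(N²) DFT against NumPy’s built-in FFT — both produce the same spectrum:

```python
import numpy as np

# A toy signal: 8 samples of a 1 Hz sine sampled at 8 Hz.
x = np.sin(2 * np.pi * np.arange(8) / 8)

# Naive DFT: O(N^2) -- multiply by the full DFT matrix.
N = len(x)
n = np.arange(N)
W = np.exp(-2j * np.pi * np.outer(n, n) / N)  # DFT matrix
X_dft = W @ x

# FFT: the same result, computed in O(N log N).
X_fft = np.fft.fft(x)

print(np.allclose(X_dft, X_fft))  # True
```

For 8 samples the difference is invisible, but for millions of samples the O(N log N) scaling is what makes the FFT practical.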
I’m following the basic derivations of equations from awesome lectures taught by Dr. Wim van Drongelen. …
Today, we are going to dive a little deeper into a famous dimensionality reduction technique called Principal Component Analysis (PCA).
If you don’t know what PCA does, check the following video (5 mins)
Or if you like reading texts instead, check below:
In short, PCA projects the data onto a lower-dimensional space by choosing axes (the principal components) that maximize the variance of the data along each axis (or, equivalently, minimize the distance of the data from the chosen axis).
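As a quick hands-on sketch (a toy example of mine, not part of the story itself), the standard recipe — center the data, eigendecompose the covariance matrix, project onto the top eigenvector — looks like this in NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy 2-D data, correlated so one direction carries most of the variance.
X = rng.normal(size=(200, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])

# 1. Center the data.
Xc = X - X.mean(axis=0)

# 2. Eigendecompose the covariance matrix.
C = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(C)  # eigenvalues in ascending order

# 3. Project onto the top principal component (largest eigenvalue).
pc1 = eigvecs[:, -1]
scores = Xc @ pc1  # 1-D projection with maximal variance

print(scores.shape)  # (200,)
```

The variance of `scores` is exactly the largest eigenvalue of the covariance matrix — that equivalence is the math we will dig into.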
Today’s focus is not to explain the intuitive meaning of PCA, but rather to understand the math behind the calculation of PCA.
The content is mainly from…
Today, we are going to study another famous decomposition called “Singular Value Decomposition (SVD)”.
We covered Eigendecomposition in the past, so in case you missed it, you can check out the topic from the links below:
You could also learn about today’s topic from the famous and wonderful lectures from Dr. Gilbert Strang from MIT.
I would strongly recommend you actually watch his videos to understand the topic in depth.
Singular Value Decomposition (SVD) is another type of decomposition. Unlike eigendecomposition, where the matrix…
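Before the derivation, here is a tiny NumPy sketch (my own illustration, not from Dr. Strang’s lecture) showing the key property: any matrix, even a non-square one, factors as A = U Σ Vᵀ and can be rebuilt exactly from those factors:

```python
import numpy as np

# Any (even non-square) matrix has an SVD: A = U @ diag(s) @ Vt.
A = np.array([[3.0, 1.0, 2.0],
              [2.0, 4.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# U has orthonormal columns, Vt has orthonormal rows,
# and s holds the (non-negative) singular values.
A_rebuilt = U @ np.diag(s) @ Vt
print(np.allclose(A, A_rebuilt))  # True
```

Note that eigendecomposition would not even apply here, since A is 2×3 — that generality is one of the big selling points of SVD.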
Today, we are continuing to study the Positive Definite Matrix a little more in-depth. More specifically, we will learn how to determine whether a matrix is positive definite or not. We will also learn the geometric interpretation of positive definiteness, which is really useful in machine learning when it comes to understanding optimization.
Just in case you missed the last story on the definition of the Positive Definite Matrix, you can check it out below.
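As a preview of the “how to determine” part, here is a small NumPy sketch (a toy example of mine, not from the story) of two standard tests for a symmetric matrix — all eigenvalues positive, or a Cholesky factorization that succeeds:

```python
import numpy as np

# A symmetric matrix is positive definite iff all its eigenvalues are > 0.
A = np.array([[2.0, -1.0],
              [-1.0, 2.0]])

eigvals = np.linalg.eigvalsh(A)  # eigenvalues of a symmetric matrix
print(np.all(eigvals > 0))  # True

# An equivalent practical test: Cholesky factorization succeeds
# exactly when the matrix is positive definite.
try:
    np.linalg.cholesky(A)
    print("positive definite")
except np.linalg.LinAlgError:
    print("not positive definite")
```

The eigenvalue test is the one that connects most directly to the geometric picture (the quadratic form xᵀAx being a “bowl” that curves upward in every direction).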
Today, we are studying more advanced topics in Linear Algebra that are more relevant and useful in machine learning.
We are building this knowledge on top of what we have already covered, so if you haven’t studied the previous materials, make sure to check them out first.
For the materials and structures, I’m following the famous and wonderful lectures from Dr. Gilbert Strang from MIT and you could see his lecture on today’s topic:
I would strongly recommend watching the video lectures from him because he explains concepts very well. Also, there are some minor materials I’m skipping in…
Previously, we finally stepped into Fourier Transform itself. You can take a look at the previous series from below.
Although the “(Continuous) Fourier Transform” we covered last time is great, it is difficult to use in practice. This is because, in real life, we are usually dealing with “discrete” data that was sampled using some kind of sensor.
Think of any time series data from traffic, weather, stocks, etc. We get the discrete values at each time point (e.g. 1 second, 2 second, 3 second, etc.) and we don’t know what’s in between those samples. We just…
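To make the setting concrete, here is a small NumPy sketch (my own toy example) of exactly this situation: we only know a signal at discrete sample points, and the DFT still lets us recover the underlying frequency:

```python
import numpy as np

fs = 8.0                       # sampling rate: one sample every 1/8 s
t = np.arange(8) / fs          # discrete time points 0, 0.125, 0.25, ...
x = np.cos(2 * np.pi * 2 * t)  # a 2 Hz cosine, known only at the samples

# DFT of the samples; bin k corresponds to frequency k * fs / N.
X = np.fft.fft(x)
freqs = np.fft.fftfreq(len(x), d=1 / fs)

# The energy concentrates at +/-2 Hz, recovering the true frequency.
peak = freqs[np.argmax(np.abs(X))]
print(abs(peak))  # 2.0
```

Everything between the sample points is invisible to us — the DFT works purely with the values we actually measured, which is why it (rather than the continuous transform) is what gets used in practice.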
Welcome to this series of stories towards understanding Linear Algebra. You could take a look at previous stories from here:
I’m following the basic structures and materials from Dr. Gilbert Strang from MIT and this story is about “Eigenvalues and Eigenvectors”! You could see his lecture on YouTube and I’m posting the relevant video from his lectures:
Before going into the details of eigenvalues and eigenvectors, I would like you to watch the 20-minute video from 3blue1brown. This video definitely helps you grasp the concepts and makes the equations easier to understand. …
Previously, we covered the basic ideas behind Fourier Series starting from the “Real Fourier Series”.
Extending to the “Complex Fourier Series”.
In this story, we are finally going to step into the “Fourier Transform”!
This will be a very important topic so try spending some time understanding this!
Again, the main part of the derivations is from lectures taught by Dr. Wim van Drongelen, and you can see his lecture online below. …
In this story, we are going to cover possibly one of the most important concepts in Linear Algebra, “Determinants”!
If you haven’t read the previous stories, you could check from here:
The determinant is a unique concept: memorizing the formula is rather simple, but understanding its meaning and true potential is often more challenging. In short, the “determinant” is the scale factor for the area or volume represented by the column vectors of a square matrix. …
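That “scale factor” interpretation is easy to check numerically. A quick sketch of mine (not from the story itself): take the parallelogram spanned by the column vectors and compare its area to the determinant:

```python
import numpy as np

# Column vectors (2, 0) and (1, 3) span a parallelogram.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])

# The determinant is the (signed) area of that parallelogram:
# the unit square (area 1) gets scaled by a factor of 6.
det = np.linalg.det(A)
print(det)  # ~6.0, up to floating-point error
```

Here the area is base × height = 2 × 3 = 6, matching the determinant; a negative determinant would mean the transformation also flips the orientation of the plane.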
Last time, I covered Real Fourier Series based on lectures from Dr. Wim van Drongelen.
In this article, we are going to extend this to “Complex Fourier Series”.
Just a short recap of last time. In Part 1, we went through the “Real Fourier Series,” where we wanted to approximate a periodic signal f(t) with P(t), where P(t) was represented as follows:
So that when we figure out the coefficients a’s and b’s, we get an approximated version of f(t) described as a linear combination of sines and cosines.
In the end, we got the equations for coefficients:
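For reference, in case the embedded equations don’t render, the standard real Fourier series over one period T (the notation in the lectures may differ slightly) is:

```latex
P(t) = \frac{a_0}{2}
     + \sum_{n=1}^{\infty}\left[
         a_n \cos\!\left(\frac{2\pi n t}{T}\right)
       + b_n \sin\!\left(\frac{2\pi n t}{T}\right)\right]
```

with the coefficients obtained by integrating f(t) against each basis function:

```latex
a_n = \frac{2}{T}\int_{0}^{T} f(t)\,\cos\!\left(\frac{2\pi n t}{T}\right) dt,
\qquad
b_n = \frac{2}{T}\int_{0}^{T} f(t)\,\sin\!\left(\frac{2\pi n t}{T}\right) dt .
```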
A Neuroengineer and Ph.D. candidate researching Brain Computer Interface (BCI). I want to build a cyberbrain system in the future. Nice meeting you!