A Visual Learner’s Guide to Explain, Implement and Interpret Principal Component Analysis (PCA)
Linear Algebra for Machine Learning — Covariance Matrix, Eigenvector and Principal Component
In my previous article, we talked about applying linear algebra to data representation in machine learning algorithms, but the applications of linear algebra in ML extend far beyond that.
This article introduces more linear algebra concepts, with the main focus on how they are applied to dimensionality reduction, especially Principal Component Analysis (PCA). In the second half of this post, we will also implement and interpret PCA in a few lines of code with the help of Python's scikit-learn.
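As a preview of that second half, a minimal sketch of running PCA with scikit-learn might look like the following. The dataset here is randomly generated toy data (an assumption for illustration, not the data used later in the article):

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy dataset: 100 samples with 5 features (hypothetical, for illustration only)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

# Reduce the 5 features down to 2 principal components
pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                 # the transformed data has 2 columns
print(pca.explained_variance_ratio_)   # fraction of variance each component captures
```

Under the hood, `fit_transform` centers the data, computes the directions of maximum variance (the eigenvectors of the covariance matrix), and projects the data onto the top components, which is exactly the linear algebra this article walks through.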
When to Use PCA?
High-dimensional data is a common issue in machine learning practice, as we typically feed a large number of features into model training. This comes at the cost of models having lower interpretability and higher complexity, also known as the…