A Visual Learner’s Guide to Explain, Implement and Interpret Principal Component Analysis (PCA)

Linear Algebra for Machine Learning — Covariance Matrix, Eigenvector and Principal Component

Destin Gong
TDS Archive
11 min read · Jan 25, 2023

Principal Component Analysis for ML (image from my website)

In my previous article, we talked about applying linear algebra to data representation in machine learning algorithms, but the applications of linear algebra in ML are much broader than that.

This article introduces more linear algebra concepts, with a focus on how they are applied to dimensionality reduction, especially Principal Component Analysis (PCA). In the second half of this post, we will also implement and interpret PCA in a few lines of code with the help of Python's scikit-learn.
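As a preview of that second half, here is a minimal sketch of the scikit-learn workflow; the iris dataset is used purely as a stand-in example, not necessarily the dataset the article works with.

```python
# Minimal PCA sketch with scikit-learn (iris is an assumed example dataset).
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X = load_iris().data                          # 150 samples x 4 features
X_scaled = StandardScaler().fit_transform(X)  # PCA is sensitive to feature scale

pca = PCA(n_components=2)                     # keep the top 2 principal components
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                        # (150, 2)
print(pca.explained_variance_ratio_)          # share of variance per component
```

`explained_variance_ratio_` is the quantity we will use later to interpret how much information each principal component retains.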

When to Use PCA?

High-dimensional data is a common issue in machine learning practice, as we typically feed a large number of features into model training. This results in models that are less interpretable and more complex, a caveat also known as the…
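The linear algebra named in the subtitle (covariance matrix, eigenvectors, principal components) can be sketched directly in NumPy; the synthetic data below is an assumed example, and the steps mirror what `PCA` does under the hood.

```python
# PCA from first principles on assumed synthetic data:
# center -> covariance matrix -> eigendecomposition -> project.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))              # 200 samples, 3 features
X_centered = X - X.mean(axis=0)            # center each feature at zero

cov = np.cov(X_centered, rowvar=False)     # 3x3 covariance matrix
eigvals, eigvecs = np.linalg.eigh(cov)     # eigh: for symmetric matrices

order = np.argsort(eigvals)[::-1]          # sort by descending variance
components = eigvecs[:, order[:2]]         # top-2 eigenvectors = principal components
X_reduced = X_centered @ components        # project the data onto 2 dimensions

print(X_reduced.shape)                     # (200, 2)
```

The eigenvectors of the covariance matrix give the directions of maximum variance, and the corresponding eigenvalues tell you how much variance each direction captures.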


Published in TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.

Written by Destin Gong

On my way to become a data storyteller | Website: www.visual-design.net
