# Mathematics of Principal Component Analysis with R Code Implementation

## Theoretical foundations of principal component analysis (PCA) with R code implementation

May 25, 2020 · 5 min read

# I. Introduction

Generally, the larger the dataset, the better; however, there is a tradeoff between the size of the dataset and the computational time needed for training. In very large datasets there may be a lot of redundancy among the features, or many unimportant features, so dimensionality reduction techniques can be used to select only a limited number of relevant features for training.

Principal Component Analysis (PCA) is a statistical method that is used for feature extraction. PCA is used for high-dimensional and highly correlated data. The basic idea of PCA is to transform the original space of features into the space of principal components, as shown in Figure 1 below:

A PCA transformation achieves the following:

a) Reduces the number of features to be used in the final model by focusing only on the components accounting for the majority of the variance in the dataset.

b) Removes the correlation between features.

# II. Mathematical Basis of PCA

To visualize the correlations between the features, we can generate a scatter plot, as shown in Figure 1. To quantify the degree of correlation between features, we can compute the covariance matrix using this equation:

$$\operatorname{cov}(X_i, X_j) = \frac{1}{n-1}\sum_{k=1}^{n}\left(x_{ki} - \bar{x}_i\right)\left(x_{kj} - \bar{x}_j\right)$$

In matrix form, the covariance matrix of the four features can be expressed as a 4 x 4 symmetric matrix:

$$\Sigma = \begin{pmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} & \sigma_{14}\\ \sigma_{12} & \sigma_{22} & \sigma_{23} & \sigma_{24}\\ \sigma_{13} & \sigma_{23} & \sigma_{33} & \sigma_{34}\\ \sigma_{14} & \sigma_{24} & \sigma_{34} & \sigma_{44} \end{pmatrix}, \qquad \sigma_{ij} = \operatorname{cov}(X_i, X_j)$$

This matrix is real and symmetric, so it can be diagonalized by an orthogonal (unitary) transformation, the PCA transformation, to obtain the following:

$$\Sigma' = U^{T}\,\Sigma\,U = \operatorname{diag}(\lambda_1, \lambda_2, \lambda_3, \lambda_4)$$

Since the trace of a matrix remains invariant under an orthogonal transformation, the sum of the eigenvalues of the diagonal matrix is equal to the total variance contained in features X1, X2, X3, and X4. Hence, we can define the proportion of variance explained by the j-th principal component, and the cumulative variance of the first p components:

$$w_j = \frac{\lambda_j}{\sum_{k=1}^{4}\lambda_k}, \qquad C(p) = \sum_{j=1}^{p} w_j$$

Notice that when p = 4, the cumulative variance C(4) becomes equal to 1, as expected.
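These quantities can be computed directly in R. The sketch below assumes the four numeric features of the built-in iris dataset used in Section III:

```r
# Eigendecomposition of the covariance matrix of the four iris features,
# followed by the proportion of variance w_j and cumulative variance C(p)
X <- as.matrix(iris[, 1:4])          # Sepal/Petal lengths and widths
S <- cov(X)                          # 4 x 4 symmetric covariance matrix
eig <- eigen(S)                      # eigenvalues in decreasing order
w <- eig$values / sum(eig$values)    # proportion of variance per component
C <- cumsum(w)                       # cumulative variance C(p)
round(w, 4)                          # 0.9246 0.0531 0.0171 0.0052
round(C, 4)                          # C(4) is exactly 1
```

Note that the eigenvalues sum to the trace of the covariance matrix, which is exactly the invariance property used above.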

# III. R Implementation of PCA

Let us look at the covariance matrix:
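One way to produce this matrix in R, assuming the built-in iris dataset discussed below:

```r
# Covariance and correlation matrices of the four numeric iris features
features <- iris[, c("Sepal.Length", "Sepal.Width",
                     "Petal.Length", "Petal.Width")]
round(cov(features), 3)   # covariance matrix
round(cor(features), 3)   # correlation matrix; note e.g. the ~0.96
                          # correlation of Petal.Length with Petal.Width
```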

Table 2 shows strong correlations between the original features in the iris dataset. Figure 2 is a pairplot showing scatter plots, density plots, and correlation coefficients for these features; the strong correlations are clearly visible.

Let us now examine the transformed covariance matrix:
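A short sketch using prcomp on the same iris features (centering only, no scaling, so the variances match the covariance-matrix analysis of Section II):

```r
# PCA via prcomp, then the covariance matrix of the rotated data:
# it is diagonal, i.e. the principal components are uncorrelated
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = FALSE)
scores <- pca$x                  # data expressed in PC coordinates
round(cov(scores), 10)           # off-diagonal entries are all ~0
```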

Table 3 shows zero correlations between transformed features. Figure 4 shows the pairplot in the PCA space. We see that the correlation between features has been removed.

Table 4 contains a summary of helpful indicators from a PCA calculation:
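In R, this summary comes directly from calling summary() on the prcomp object (again assuming the iris features):

```r
# Standard deviation, proportion of variance, and cumulative proportion
# for each principal component of the iris features
pca <- prcomp(iris[, 1:4])
summary(pca)   # cumulative proportion reaches ~0.995 at PC3
```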

Based on this summary, we see that 99.5 percent of the variance is contributed by the first three principal components (p = 3). This means that in the final model the fourth principal component, PC4, could be dropped, since its contribution to the variance is negligible.

# Additional Data Science/Machine Learning Resources

Data Science Curriculum

Essential Maths Skills for Machine Learning

3 Best Data Science MOOC Specializations

5 Best Degrees for Getting into Data Science

5 reasons why you should begin your data science journey in 2020

Theoretical Foundations of Data Science — Should I Care or Simply Focus on Hands-on Skills?

Machine Learning Project Planning

How to Organize Your Data Science Project

Productivity Tools for Large-scale Data Science Projects

A Data Science Portfolio is More Valuable than a Resume

Data Science 101 — A Short Course on Medium Platform with R and Python Code Included

For questions and inquiries, please email me: benjaminobi@gmail.com
