
# Mathematics of Principal Component Analysis with R Code Implementation

## Theoretical foundations of principal component analysis (PCA) with R code implementation

# I. Introduction

In machine learning, a dataset containing features (predictors) together with discrete class labels (for a classification problem, such as one solved with logistic regression) or continuous outcomes (for a linear regression problem) is used to build a predictive model that can make predictions on unseen data. The predictive power of a model depends greatly on the quality and size of the training dataset.

Generally, the larger the dataset, the better; however, there is a tradeoff between the size of the dataset and the computational time needed for training. Very large datasets often contain considerable redundancy in the features, or many unimportant features, so dimensionality reduction techniques can be used to select only a limited number of relevant features for training.

**Principal Component Analysis (PCA)** is a statistical method used for feature extraction; it is particularly well suited to high-dimensional, highly correlated data. The basic idea of PCA is to transform the original space of features into the space of principal components, as shown in **Figure 1** below:

A PCA transformation achieves the following:

**a)** Reduces the number of features used in the final model, by keeping only the components that account for the majority of the variance in the dataset.

**b)** Removes the correlation between features.

# II. Mathematical Basis of PCA

Suppose we have a highly correlated features matrix with *4* features and *n* observations, as shown in **Table 1** below:

To visualize the correlations between the features, we can generate a scatter plot, as shown in **Figure 1**. To quantify the degree of correlation between features, we can compute the covariance matrix using this equation:
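For two features $X_j$ and $X_k$ with $n$ observations, the sample covariance takes the standard form (the notation here is assumed, since the original equation is shown as an image):

```latex
\operatorname{cov}(X_j, X_k) \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\,(x_{ij}-\bar{x}_j)(x_{ik}-\bar{x}_k)
```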

In matrix form, the covariance matrix can be expressed as a 4 x 4 symmetric matrix:
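Writing the covariance matrix as $\Sigma$ (symbol assumed), the four-feature case reads:

```latex
\Sigma \;=\;
\begin{pmatrix}
\operatorname{var}(X_1)     & \operatorname{cov}(X_1,X_2) & \operatorname{cov}(X_1,X_3) & \operatorname{cov}(X_1,X_4)\\
\operatorname{cov}(X_2,X_1) & \operatorname{var}(X_2)     & \operatorname{cov}(X_2,X_3) & \operatorname{cov}(X_2,X_4)\\
\operatorname{cov}(X_3,X_1) & \operatorname{cov}(X_3,X_2) & \operatorname{var}(X_3)     & \operatorname{cov}(X_3,X_4)\\
\operatorname{cov}(X_4,X_1) & \operatorname{cov}(X_4,X_2) & \operatorname{cov}(X_4,X_3) & \operatorname{var}(X_4)
\end{pmatrix}
```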

This matrix can be diagonalized by performing a unitary transformation (PCA transformation) to obtain the following:
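Writing $P$ for the matrix whose columns are the eigenvectors of $\Sigma$, and $\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \lambda_4$ for the eigenvalues (notation assumed), the diagonalized form is:

```latex
\Sigma' \;=\; P^{\mathsf{T}}\,\Sigma\,P \;=\;
\begin{pmatrix}
\lambda_1 & 0 & 0 & 0\\
0 & \lambda_2 & 0 & 0\\
0 & 0 & \lambda_3 & 0\\
0 & 0 & 0 & \lambda_4
\end{pmatrix}
```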

Since the trace of a matrix remains invariant under a unitary transformation, we observe that the sum of the eigenvalues of the diagonal matrix is equal to the total variance contained in features X1, X2, X3, and X4. Hence, we can define the following quantities:
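In symbols, the trace identity and the resulting variance measures are (standard definitions, restated under the notation above):

```latex
\operatorname{trace}(\Sigma') \;=\; \sum_{j=1}^{4}\lambda_j
\;=\; \operatorname{var}(X_1)+\operatorname{var}(X_2)+\operatorname{var}(X_3)+\operatorname{var}(X_4),
\qquad
\text{cumulative variance}(p) \;=\; \frac{\sum_{j=1}^{p}\lambda_j}{\sum_{j=1}^{4}\lambda_j}
```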

Notice that when p = 4, the cumulative variance becomes equal to 1 as expected.

# III. R Implementation of PCA

To illustrate how PCA works, we show an example by examining the iris dataset. The R code can be downloaded from here: *https://github.com/bot13956/principal_component_analysis_iris_dataset/blob/master/PCA_irisdataset.R*
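The core of such a calculation can be sketched with base R's `prcomp` (a minimal sketch; the variable names here are illustrative and not necessarily those used in the linked script):

```r
# Keep the four numeric features of the built-in iris dataset
X <- iris[, 1:4]

# PCA on centered and scaled data (equivalent to working with the
# correlation matrix rather than the raw covariance matrix)
pca <- prcomp(X, center = TRUE, scale. = TRUE)

# Eigenvalues are the squared standard deviations of the components
eigenvalues <- pca$sdev^2

# Proportion of total variance explained by each principal component
prop_var <- eigenvalues / sum(eigenvalues)
print(round(prop_var, 4))
```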

Let us look at the covariance matrix:
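In R, the covariance and correlation matrices of the original features can be inspected directly (a sketch of how output like Table 2 might be produced):

```r
# Covariance matrix of the four original iris features
round(cov(iris[, 1:4]), 3)

# Correlation matrix, which makes the strength of the pairwise
# relationships easier to read than the raw covariances
round(cor(iris[, 1:4]), 3)
```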

**Table 2** shows strong correlations between original features in the iris dataset. **Figure 2** is a pairplot that shows scatter plots, density plots, and correlation coefficients between original features. Notice the strong correlations between original features.

Let us now examine the transformed covariance matrix:
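One way to verify this in R (a sketch, assuming the PCA was computed with `prcomp` on centered and scaled data):

```r
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

# Covariance matrix of the PCA scores: the off-diagonal entries are
# zero up to floating-point error, and the diagonal holds the eigenvalues
round(cov(pca$x), 3)
```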

**Table 3** shows zero correlations between transformed features. **Figure 4** shows the pairplot in the PCA space. We see that the correlation between features has been removed.

**Table 4** contains a summary of helpful indicators from a PCA calculation:
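In R, these indicators come directly from calling `summary` on the `prcomp` object (a sketch):

```r
pca <- prcomp(iris[, 1:4], center = TRUE, scale. = TRUE)

# Standard deviation, proportion of variance, and cumulative
# proportion of variance for each principal component
summary(pca)
```

For the scaled iris data, the cumulative proportion for the first three components comes out to about 0.995.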

Based on this summary, we see that 99.5 percent of the variance is contributed by the first three principal components (p = 3). This means that in the final model, the fourth principal component PC4 could be dropped since its contribution to the variance is negligible.

# IV. Summary and Conclusion

In summary, we have explained the mathematical foundations of PCA and shown how the algorithm can be implemented in R, using the iris dataset for illustration. The R code used for performing the calculations can be downloaded from here: *https://github.com/bot13956/principal_component_analysis_iris_dataset/blob/master/PCA_irisdataset.R*

# Additional Data Science/Machine Learning Resources

- Data Science Minimum: 10 Essential Skills You Need to Know to Start Doing Data Science
- Essential Maths Skills for Machine Learning
- 3 Best Data Science MOOC Specializations
- 5 Best Degrees for Getting into Data Science
- 5 reasons why you should begin your data science journey in 2020
- Theoretical Foundations of Data Science — Should I Care or Simply Focus on Hands-on Skills?
- Machine Learning Project Planning
- How to Organize Your Data Science Project
- Productivity Tools for Large-scale Data Science Projects
- A Data Science Portfolio is More Valuable than a Resume
- Data Science 101 — A Short Course on Medium Platform with R and Python Code Included

**For questions and inquiries, please email me:** benjaminobi@gmail.com