Understanding Principal Component Analysis

  The PCA algorithm can be summarized in five steps (a code sketch follows this list):
  1. Calculate the covariance matrix Cx of the data points X.
  2. Calculate its eigenvectors and the corresponding eigenvalues.
  3. Sort the eigenvectors by their eigenvalues in decreasing order.
  4. Choose the first k eigenvectors; these will be the new k dimensions.
  5. Transform the original n-dimensional data points into k dimensions.
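
A minimal NumPy sketch of these five steps, assuming the rows of X are data points and the columns are features (the function and variable names here are illustrative, not from the original derivation):

```python
import numpy as np

def pca(X, k):
    """Project n-dimensional data points onto their top-k principal components."""
    # 1. Center the data and compute the covariance matrix Cx.
    X_centered = X - X.mean(axis=0)
    Cx = np.cov(X_centered, rowvar=False)
    # 2. Eigenvectors and eigenvalues of the (symmetric) covariance matrix.
    eigenvalues, eigenvectors = np.linalg.eigh(Cx)
    # 3. Sort the eigenvectors by eigenvalue in decreasing order.
    order = np.argsort(eigenvalues)[::-1]
    eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
    # 4. Choose the first k eigenvectors as the new dimensions.
    components = eigenvectors[:, :k]
    # 5. Transform the n-dimensional points into k dimensions.
    return X_centered @ components, components

# Example: reduce 100 points from 3 dimensions to 2.
X = np.random.rand(100, 3)
X_reduced, components = pca(X, k=2)
print(X_reduced.shape)  # (100, 2)
```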
  The goals PCA is trying to achieve:
  1. Find linearly independent dimensions (or a basis of views) that can losslessly represent the data points.
  2. Those newly found dimensions should allow us to predict/reconstruct the original dimensions; the reconstruction/projection error should be minimized (a worked check follows this list).
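
To make the second goal concrete, here is a short continuation of the sketch above (reusing its `X`, `X_reduced`, and `components`): mapping the k-dimensional projection back into the original space gives an approximation of X, and the mean squared difference between the two is the reconstruction error PCA keeps small:

```python
# Reconstruct the original points from the k-dimensional projection.
X_reconstructed = X_reduced @ components.T + X.mean(axis=0)

# Reconstruction (projection) error: mean squared distance to the originals.
error = np.mean((X - X_reconstructed) ** 2)
print(f"Reconstruction error: {error:.6f}")
```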
  • Theorem-1 :
    The inverse of an orthogonal matrix is its transpose. Why? If the columns of Q are orthonormal, then Q^T Q = I, so Q^T must be Q^-1.
  • Theorem-2 :
    For any matrix A, both A A^T and A^T A are symmetric.
  • Theorem-3 :
    A symmetric matrix is orthogonally diagonalizable: it can be written as E D E^T, where E is an orthogonal matrix whose columns are its eigenvectors and D is a diagonal matrix of its eigenvalues.
  • The principal components of X are the eigenvectors of Cx.
  • The i-th diagonal value of Cy is the variance of X along pi, the i-th principal component (a numerical check follows this list).
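
The last two bullets can be checked numerically. This sketch (again with illustrative names, and again assuming rows of X are data points) changes basis using the eigenvectors of Cx and confirms that the resulting Cy is diagonal, with each diagonal value equal to the variance of the data along the corresponding principal component:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # correlated data
X_centered = X - X.mean(axis=0)

# Covariance matrix Cx and its eigen-decomposition (Theorem-3: Cx = E D E^T).
Cx = np.cov(X_centered, rowvar=False)
eigenvalues, E = np.linalg.eigh(Cx)

# Change of basis onto the eigenvectors: the covariance of Y is Cy = E^T Cx E.
Y = X_centered @ E
Cy = np.cov(Y, rowvar=False)

# Cy is (numerically) diagonal, and its i-th diagonal value is the
# variance of X along the i-th principal component pi.
print(np.round(Cy, 10))                       # off-diagonal entries ~ 0
print(np.allclose(np.diag(Cy), eigenvalues))  # True
```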
