New applications of Principal Component Analysis (PCA), part 2 (Machine Learning)

Monodeep Mukherjee
2 min read · Mar 13, 2023
1. Enhancing the performance of multiparameter tests of general relativity with LISA using Principal Component Analysis (arXiv)

Author: Sayantani Datta

Abstract: The Laser Interferometer Space Antenna (LISA) will provide us with a unique opportunity to observe the early inspiral phase of supermassive binary black holes (SMBBHs) in the mass range of 10^5–10^6 M⊙, which lasts for several years. It will also detect the merger and ringdown phases of these sources. Such sources are therefore extremely useful for multiparameter tests of general relativity (GR), in which parametrized deviations from GR at multiple post-Newtonian (PN) orders are measured simultaneously, allowing for a rigorous test of GR. However, the correlations of the deviation parameters with the intrinsic parameters of the system make multiparameter tests extremely challenging to perform. We demonstrate the use of principal component analysis (PCA) to obtain a new set of deviation parameters: the best-measured orthogonal linear combinations of the original deviation parameters. With the observation of an SMBBH of total redshifted mass ~7×10^5 M⊙ at a luminosity distance of 3 Gpc, we can estimate the five most dominant PCA parameters with 1-σ statistical uncertainty of ≲0.2. The two most dominant PCA parameters can be bounded to ~O(10^-4), while the third- and fourth-dominant ones to ~O(10^-3). Measurement of the PCA parameters with such unprecedented precision with LISA makes them an excellent probe to test the overall PN structure of the GW phase evolution.
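The core PCA step here is straightforward to sketch: eigendecompose the Fisher information matrix over the deviation parameters, and the eigenvectors with the largest eigenvalues are the best-measured orthogonal combinations. Below is a minimal numerical sketch of that idea; the 4×4 Fisher matrix is an invented toy (the paper's analysis uses full LISA waveform models), so the numbers are purely illustrative.

```python
import numpy as np

# Toy Fisher information matrix over 4 hypothetical PN deviation
# parameters. The values are made up; only the PCA step mirrors
# the paper's construction.
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))
fisher = A @ A.T + 4.0 * np.eye(4)  # symmetric positive definite

# PCA step: eigendecompose the Fisher matrix. Eigenvectors give
# orthogonal linear combinations of the deviation parameters;
# larger eigenvalues mean better-measured combinations.
eigvals, eigvecs = np.linalg.eigh(fisher)
order = np.argsort(eigvals)[::-1]  # most dominant first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# 1-sigma statistical uncertainty on each PCA parameter.
sigma = 1.0 / np.sqrt(eigvals)
for i in range(len(eigvals)):
    print(f"PCA parameter {i}: combination {eigvecs[:, i].round(2)}, "
          f"1-sigma uncertainty {sigma[i]:.3f}")
```

The decorrelation is the whole point: the original deviation parameters are strongly correlated and individually poorly constrained, while the dominant eigenvector combinations come with small, independent uncertainties.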

2. Deep Kernel Principal Component Analysis for Multi-level Feature Learning (arXiv)

Authors: Francesco Tonin, Qinghua Tao, Panagiotis Patrinos, Johan A. K. Suykens

Abstract: Principal Component Analysis (PCA) and its nonlinear extension Kernel PCA (KPCA) are widely used across science and industry for data analysis and dimensionality reduction. Modern deep learning tools have achieved great empirical success, but a framework for deep principal component analysis is still lacking. Here we develop a deep kernel PCA methodology (DKPCA) to extract multiple levels of the most informative components of the data. Our scheme can effectively identify new hierarchical variables, called deep principal components, capturing the main characteristics of high-dimensional data through a simple and interpretable numerical optimization. We couple the principal components of multiple KPCA levels, theoretically showing that DKPCA creates both forward and backward dependency across levels, which has not been explored in kernel methods and yet is crucial to extract more informative features. Various experimental evaluations on multiple data types show that DKPCA finds more efficient and disentangled representations with higher explained variance in fewer principal components, compared to the shallow KPCA. We demonstrate that our method allows for effective hierarchical data exploration, with the ability to separate the key generative factors of the input data both for large datasets and when few training samples are available. Overall, DKPCA can facilitate the extraction of useful patterns from high-dimensional data by learning more informative features organized in different levels, giving diversified aspects to explore the variation factors in the data, while maintaining a simple mathematical formulation.
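To ground the comparison, here is a short sketch of the shallow-KPCA baseline the paper measures against, plus a naive two-level stack of KPCA. Note the caveat: greedily re-applying KPCA to the first level's components is only a rough stand-in; the paper's DKPCA couples all levels in a single optimization with the forward and backward dependencies described above, which this sketch does not capture. It uses scikit-learn's KernelPCA on a toy two-moons dataset.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.decomposition import KernelPCA

# Nonlinear toy data; an RBF-kernel KPCA can unfold structure
# that linear PCA cannot.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# Level 1: shallow KPCA (the baseline the paper compares against).
kpca1 = KernelPCA(n_components=2, kernel="rbf", gamma=10.0)
Z1 = kpca1.fit_transform(X)

# Level 2: naively re-apply KPCA to the level-1 components.
# NOTE: this greedy stacking is NOT the paper's DKPCA, which
# optimizes the levels jointly so that each level informs the others.
kpca2 = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
Z2 = kpca2.fit_transform(Z1)

print("level-1 components shape:", Z1.shape)
print("level-2 components shape:", Z2.shape)
```

The gap between this greedy stack and a jointly trained hierarchy is exactly what the paper's explained-variance results quantify: coupled levels pack more information into fewer deep principal components.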


Monodeep Mukherjee

Universe Enthusiast. Writes about Computer Science, AI, Physics, Neuroscience, Technology, and Front-End and Back-End Development.