Geoffrey Hinton Leads Google Brain Representation Similarity Index Research Aiming to Understand Neural Networks

Synced · Published in SyncedReview · May 27, 2019

A Google Brain research team led by Turing Award recipient Geoffrey Hinton recently published a paper presenting an effective method for measuring the similarity of representations learned by deep neural networks. The goal is to deepen human understanding of artificial neural networks to enable better model training and debugging.

Deep neural networks are essentially a representation learning method: they extract features from input data and use these features to perform machine learning tasks. Neural networks, however, remain a black-box technology: researchers cannot fully understand or characterize their internal mechanisms or decision-making processes.

Many academics now believe that comparing neural networks’ representations can help researchers understand their abstract operations and learn what may happen when their architecture or hyperparameters are changed. Essentially, researchers need a similarity index to make such comparisons quantitative.

While previous work has made tremendous progress, the Google researchers suggest in their new paper that these studies still “ignore the complex interaction between the training dynamics and structured data.” They argue that a similarity index’s invariance properties shape its results: normalization techniques are necessary to ensure the index is invariant to isotropic scaling and orthogonal transformation, but it should not be invariant to invertible linear transformation.
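To make these invariance properties concrete, here is a minimal NumPy sketch (our own illustration, not code from the paper) of the three kinds of transformation applied to a toy activation matrix X whose rows are examples and columns are neurons:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 64))  # toy activations: 100 examples x 64 neurons

# 1. Isotropic scaling: every activation multiplied by the same constant.
X_scaled = 1.7 * X

# 2. Orthogonal transformation: a rotation/reflection of neuron space.
Q, _ = np.linalg.qr(rng.standard_normal((64, 64)))  # random orthogonal matrix
X_rotated = X @ Q

# 3. Invertible linear transformation: arbitrary full-rank mixing of neurons.
A = rng.standard_normal((64, 64))  # a random square matrix is almost surely invertible
X_mixed = X @ A
```

A similarity index with the desired invariances should treat X, X_scaled, and X_rotated as identical representations, while remaining free to distinguish X from X_mixed.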

These observations led the researchers to introduce centered kernel alignment (CKA) as a similarity index for comparing different neural networks, or different hidden layers within the same neural network, trained with different random initializations, widths, and other settings. CKA is not a new concept; it was first introduced in 2002 as an approach for learning kernels based on the notion of centered alignment.
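For readers who want the mechanics, below is a minimal sketch of linear CKA following the formula in the paper; the function name and docstring are our own:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between activation matrices X (n x p1) and Y (n x p2).

    Rows are the same n examples; columns are neurons. Returns a value
    in [0, 1], where 1 indicates identical representations up to
    isotropic scaling and orthogonal transformation.
    """
    # Center each neuron's activations across examples.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # CKA(X, Y) = ||Y^T X||_F^2 / (||X^T X||_F * ||Y^T Y||_F)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return numerator / denominator
```

Applied to the transformed matrices from the earlier sketch, linear_cka(X, X_scaled) and linear_cka(X, X_rotated) both return 1.0 (up to floating point), while linear_cka(X, X_mixed) generally does not, which is exactly the invariance profile the authors argue for.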

The researchers conducted experiments comparing CKA with a linear kernel against related similarity indexes, including linear regression, canonical correlation analysis (CCA), singular vector CCA (SVCCA), and projection-weighted CCA (PWCCA).
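As a point of contrast, a mean-squared-CCA similarity (the quantity underlying the CCA-based baselines) can be sketched as follows; again this is our own minimal version, assuming X and Y have full column rank, not the authors’ code:

```python
import numpy as np

def cca_similarity(X, Y):
    """Mean squared canonical correlation between centered X and Y.

    Unlike linear CKA, this index is also invariant to any invertible
    linear transformation of either representation.
    """
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)

    # Orthonormal bases for the column spaces of X and Y.
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)

    # Squared singular values of Qy^T Qx are the squared canonical correlations.
    p = min(X.shape[1], Y.shape[1])
    return np.linalg.norm(Qy.T @ Qx, "fro") ** 2 / p
```

The extra invariance is precisely what the paper takes issue with: an index that cannot distinguish representations related by arbitrary invertible mixing discards information that matters in practice.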

The results showed that CKA not only outperforms the other indexes in revealing consistent relationships between layers of neural networks trained with different random initializations, but can also identify correspondences between layers across different network architectures. Moreover, CKA can detect similar representations between models of the same architecture trained on different datasets. The researchers also created an approach to visualize what CKA is measuring.

CKA also reveals some interesting and unexpected properties of neural networks. For example, in an individual convolutional neural network built at 8x depth (each layer repeated eight times), “CKA indicates that representations of more than half of the network are very similar to the last layer.” That is, although added depth can significantly improve classification accuracy in shallower architectures, in the 8x-deeper network accuracy plateaus less than halfway through the network.

The paper Similarity of Neural Network Representations Revisited is on arXiv.

Journalist: Tony Peng | Editor: Michael Sarazen

2018 Fortune Global 500 Public Company AI Adaptivity Report is out!
Purchase a Kindle-formatted report on Amazon.
Apply for Insight Partner Program to get a complimentary full PDF report.

Follow us on Twitter @Synced_Global for daily AI news!

We know you don’t want to miss any stories. Subscribe to our popular Synced Global AI Weekly to get weekly AI updates.
