Internal Representations Learned by Neural Networks and Why They Are Compared

Gatha Varma, PhD · Published in WiCDS · 3 min read · Jan 16, 2021

Part 1 of 5: Understanding what makes up the internal representations of deep learning networks and their significance

It is always the small pieces that make the big picture.

What are representations learned by neural networks?

Neural networks come in many flavors: deep or shallow, feedforward or recurrent, with memory cells or with gates. Neural networks built from neurons that fire, solve problems, and help you make decisions. Different architectures are employed to answer different types of problems. So what are the representations that define these deep neural networks? A network creates patterns of activations from its input data, learns those patterns as it trains on a problem, and uses them to solve the task once trained. All these functions and more are achieved through neurons, so the first question to answer is: what comprises the representation of a single neuron?

Let us freeze time and look at a neuron at a particular epoch. It sits at a particular layer of the network, about to apply its function to the inputs it has received. The representation of a neuron is the portrayal of all of its possible input → output mappings. Wait! Did I say ‘all’? Is that not going to be an infinite-sized set? True, and this is why researchers focus on a finite set of inputs drawn from a training or validation set.

Time to throw in some mathematics:

For a given dataset X = {x₁, …, xₘ} and a neuron i at layer l, the vector of outputs on X can be written as

zˡᵢ = (zˡᵢ(x₁), …, zˡᵢ(xₘ))

Here, zˡᵢ is the response of a single neuron over the entire dataset and not the response of a particular layer for a single input. In short, a neuron’s representation is a single vector in a high-dimensional space.
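To make this concrete, here is a minimal sketch in Python/NumPy. The toy dataset, the single ReLU layer, and its weights are all hypothetical; the point is only to show how the vector zˡᵢ is assembled, one entry per input example.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy dataset X = {x1, ..., xm} with m = 5 examples of 4 features each.
X = rng.normal(size=(5, 4))

# Hypothetical weights of a single hidden layer with 3 neurons (ReLU activation).
W = rng.normal(size=(4, 3))
b = np.zeros(3)

def layer_outputs(X):
    """Return the layer's activations for every example in X (shape m x 3)."""
    return np.maximum(X @ W + b, 0.0)

# The representation of neuron i is its output on every example:
# z_i = (z_i(x1), ..., z_i(xm)) -- a single vector in R^m.
i = 0
z_i = layer_outputs(X)[:, i]
print(z_i.shape)  # (5,) -- one entry per input example
```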

Now let us unfreeze time (remember, we had suspended the neuron in the time continuum for ease of understanding?) and zoom out to look at all the neurons at work across a single layer. A single layer in a neural network can now be visualized as the set of neuron vectors it contains.

To put it in formal terms, for a dataset X with m samples,

a neuron’s representation is a vector in ℝᵐ, and

a layer of the neural network is the subspace of ℝᵐ spanned by its neurons’ vectors.
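As an illustrative sketch (the same hypothetical toy layer as above, restated so the snippet runs on its own): stacking the neuron vectors side by side gives an activation matrix whose column space is exactly the subspace spanned by the neurons’ vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 4))                  # m = 5 examples, 4 features
W, b = rng.normal(size=(4, 3)), np.zeros(3)  # hypothetical layer weights

# Activation matrix of the layer: rows are examples, columns are neurons,
# so column i is exactly the neuron vector z_i from the formula above.
Z = np.maximum(X @ W + b, 0.0)

# The layer's representation is the subspace of R^m (here R^5) spanned by
# the columns of Z; its dimension is at most the rank of Z.
print(Z.shape)                       # (5, 3)
print(np.linalg.matrix_rank(Z))      # dimension of the spanned subspace
```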

Why Compare Representational Similarities of Neural Networks?

We already know about the different types of neural networks and their applications. Then why study the similarities between their representations?

While studies have been done to understand the dynamics of the training process of neural networks, they have largely overlooked the interactions between training dynamics and structured data. Understanding a network’s representations could give more insight into the interactions that take place between machine learning algorithms and data. According to Kornblith et al., an effective measure of representational similarity could help answer many interesting questions, including:

(1) Do deep neural networks with the same architecture trained from different random initializations learn similar representations?

Spoiler: Confirmed true when similarity is measured with linear and RBF kernels.

(2) Can we establish correspondences between layers of different network architectures?

Spoiler: Yes, we can.

(3) How similar are the representations learned using the same network architecture from different datasets?

Spoiler: Similar representations are developed in the early layers of the compared networks.

Coming back to the question: why do researchers compare the internal representations of neural networks? The answer is to understand how input data is interpreted by different kinds of networks, and thereby to improve the design and efficiency of future machine learning systems.
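For a taste of how such a comparison can be computed in practice, below is a minimal sketch of linear CKA (centered kernel alignment), one of the similarity indices studied by Kornblith et al. The activation matrices are assumed to come from two layers evaluated on the same set of examples, and the example data is purely illustrative; treat this as a sketch rather than a reference implementation.

```python
import numpy as np

def linear_cka(A, B):
    """Linear CKA between two activation matrices.

    A: (m, p1) activations of one layer on m examples.
    B: (m, p2) activations of another layer on the same m examples.
    Returns a similarity score between 0 and 1.
    """
    # Center each neuron (column) so the index is invariant to mean shifts.
    A = A - A.mean(axis=0, keepdims=True)
    B = B - B.mean(axis=0, keepdims=True)

    # ||B^T A||_F^2, normalized by the self-similarity of each representation.
    numerator = np.linalg.norm(B.T @ A, ord="fro") ** 2
    denominator = (np.linalg.norm(A.T @ A, ord="fro") *
                   np.linalg.norm(B.T @ B, ord="fro"))
    return numerator / denominator

# Example: the same hypothetical layer with its neurons reordered.
rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 64))
acts_b = acts_a[:, rng.permutation(64)]   # same representation, neurons permuted
print(linear_cka(acts_a, acts_b))         # ~1.0: permuting neurons does not change the representation
```

A nice property visible in the example: the score is unaffected by reordering (or rotating) the neurons within a layer, which is exactly what you want when comparing networks whose individual neurons have no canonical alignment.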

In my coming articles, I will discuss two recent methods that have been used to answer the above questions and more. Stay tuned!


Research Scientist @Censius Inc. Find more of my ramblings at: gathavarma.com