Neural Manifolds — Linear Algebra and Topology in Neuroscience

Niranjan Rajesh
Published in Bits and Neurons
Aug 26, 2023 · 8 min read

Although I am a Computer Science major, my favourite course so far in my undergraduate journey has been an introductory course in Linear Algebra. It was quite a difficult course with weekly quizzes and multiple internal exams, but despite all of that, when things came together and clicked in my head, it was very satisfying and exciting. I got a taste of that same excitement when I read the paper by Sadtler et al. on Neural Manifolds. The paper cleverly establishes that neural activity is inherently constrained by properties of the physical network circuitry itself. These constraints result in neural activity patterns that occupy a low-dimensional subspace — the manifold — within the larger possible high-dimensional neural space. The authors relate this discovery to skill learning and adaptation, and I was able to appreciate these findings thanks to my Linear Algebra course!

The image, taken from a paper by Chaudhuri et al., shows that the underlying structure of neural data corresponding to head direction (the direction the head is facing) is a one-dimensional ring. This suggests that only one variable is being encoded by this structure — the angular position of the head; source

Some Background and Terminology

Parts of your brain are working together right now to read this article. Let’s say that I put an electrode in your brain that is able to magically capture the activity of all the neurons you are using right now to read. You could visualise the data you are collecting in an n-dimensional vector space where each dimension (or axis) represents the activity of one neuron. If you collect the data of 100 neurons, this neural space would have 100 dimensions.
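To make this concrete, here is a minimal sketch (the 100-neuron count and the Poisson spike statistics are illustrative assumptions, not anything from the paper) of how such a recording looks as points in a 100-dimensional neural space:

```python
import numpy as np

# A hypothetical recording: spike counts of 100 neurons over 500 time bins.
# Each column is one neural activity pattern -- a single point in the
# 100-dimensional neural space described above.
rng = np.random.default_rng(0)
n_neurons, n_bins = 100, 500
spike_counts = rng.poisson(lam=5.0, size=(n_neurons, n_bins))

# One moment of activity is just a 100-dimensional vector:
pattern_at_t0 = spike_counts[:, 0]
print(pattern_at_t0.shape)  # (100,)
```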

Neurons fire in relation to other neurons — they are not quite independent. This is due to the inhibitory (negative) or excitatory (positive) connections between them. Two neurons that share an excitatory connection may fire together, whereas if they share an inhibitory connection, only one of them tends to fire. In other words, neurons co-modulate. These co-modulations imply that some neural activity patterns are more likely than others, and some patterns may not occur at all for a given action. This motivates the idea of a subspace within the neural space where most or all of the neural activity patterns reside. The authors call this subspace the intrinsic manifold. The subspace picture is apt because any point in it can be written as a linear combination of the activities of multiple neurons!
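A small simulation makes the link between co-modulation and a low-dimensional subspace visible. In this sketch (the 5 shared latent signals and the mixing weights are invented for illustration), neurons driven by a handful of common signals produce activity whose variance is captured by only a few dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_bins, n_latents = 100, 2000, 5

# Shared latent "co-modulation" signals drive all neurons together.
latents = rng.normal(size=(n_latents, n_bins))
# Each neuron mixes the latents with its own weights (excitatory or inhibitory).
weights = rng.normal(size=(n_neurons, n_latents))
noise = 0.1 * rng.normal(size=(n_neurons, n_bins))
activity = weights @ latents + noise

# The singular values of the activity matrix fall off sharply after ~5,
# i.e. most patterns live in a ~5-dimensional subspace of the 100-D space.
singular_values = np.linalg.svd(activity, compute_uv=False)
print(np.round(singular_values[:8], 1))
```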

These manifolds are thought to be task-dependent. This implies that the actual neural activity directly responsible for the task’s output — termed the Control Space — also resides within the Intrinsic Manifold. Refer to the diagram below for a simplified visualisation in a 3-dimensional neural space (implying that only 3 neurons’ activity is measured).

Diagram from the paper visualising co-modulation patterns and the Intrinsic Manifold within the Neural Space
Left: Visualisation of how co-modulation between neurons arises | Right: Geometric, simplified representation of how the Manifold lives within the Neural Space; source

What exactly is a manifold? The concept of manifolds comes from the mathematical area of topology. A rigorous definition of a manifold is as follows: “A manifold is a topological space that locally resembles Euclidean space near each point.” To keep things simple, a topological space can be abstracted as the most general mathematical space, and we will just focus on manifolds. The most common example of a manifold is a sphere, which is not a Euclidean space since two points on the sphere cannot be connected by a straight line along the sphere (you would need a geodesic path). However, locally, you can approximate the sphere’s structure using a Euclidean plane — think about us experiencing the world as flat when the Earth is, in reality, spherical. Coming back to our topic, the authors suggest that the constraints of neural connectivity cause neural activity to be confined to a low-dimensional manifold within the higher-dimensional neural space. This geometric structure within the data could tell us a lot about the hidden workings of neural activity.
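A quick numerical sketch of “locally Euclidean”: points sampled near one spot on a unit sphere barely deviate from the tangent plane at that spot, while the same plane fails badly as a global description of the sphere (the sampling scheme and thresholds below are arbitrary choices made only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

def random_points_on_sphere(n):
    # Uniform points on the unit sphere via normalised Gaussian samples.
    v = rng.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

points = random_points_on_sphere(20000)

# The tangent plane at the north pole is z = 1. For points in a small
# neighbourhood of the pole, the deviation from that plane is tiny...
near = points[points[:, 2] > 0.995]
print(np.abs(near[:, 2] - 1.0).max())   # ~5e-3: locally almost flat

# ...but over the whole sphere the plane is a terrible description.
print(np.abs(points[:, 2] - 1.0).max()) # ~2: globally very curved
```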

Image of 2D manifolds
Examples of 2-dimensional manifolds within a larger space. Note that the surfaces of these structures can all be locally represented with 2-dimensional planes, even though they exist in 3D space; Source

The experiment to verify a manifold and its significance

The authors of the paper predicted that the manifold dictated the ease or difficulty with which an animal can learn a new behaviour. They theorised that every action has an intrinsic manifold that is determined by the co-modulation patterns of the neurons involved in that action’s execution. Intuitively, if the task’s neural control was altered, re-learning would be required. They believed that if the neural control of a certain task was inside this manifold, learning would occur more readily than if the neural control was modified to be outside the manifold.

Setup

To test these hypotheses, the researchers set up an experiment involving monkeys in a Brain-Computer Interface (BCI) paradigm. The monkeys learned how to use neural activity to move a cursor on a computer screen. Activity in the primary motor cortex (the region involved in the task) was recorded using a 96-channel microelectrode array, so 96 neurons’ activity was measured.

visualisation of experimental setup
Visualisation of the experiment setup. The BCI mapping refers to the relation between neural activity and cursor velocity in the task; Source
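As a rough sketch of what a BCI mapping is (the paper’s actual decoder is more elaborate; the plain linear readout and the random weights below are simplifying assumptions), you can think of it as a matrix that turns a 96-dimensional activity pattern into a 2-D cursor velocity:

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons = 96  # matches the 96-channel array in the setup above

# A hypothetical linear BCI mapping: neural activity -> 2-D cursor velocity.
decoder = rng.normal(size=(2, n_neurons)) * 0.01

def cursor_velocity(activity):
    """Map one 96-D neural activity pattern to a (vx, vy) cursor velocity."""
    return decoder @ activity

activity_now = rng.poisson(lam=5.0, size=n_neurons)
print(cursor_velocity(activity_now))  # a 2-vector: the cursor's velocity
```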

Manifold Identification

An intrinsic manifold was identified from the neural activity recorded during a calibration period, using dimensionality reduction techniques like Principal Component Analysis (PCA) and Factor Analysis. The output of these algorithms was a lower-dimensional space that explained ‘where’ most (or all) of the data lies — the intrinsic manifold!
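Here is a sketch of that identification step using scikit-learn (the simulated calibration data and the choice of 10 latent dimensions are illustrative assumptions, not the paper’s exact numbers):

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(4)

# Simulated "calibration" data: 96 neurons driven by 10 shared latent factors,
# mimicking activity that is confined to a low-dimensional manifold.
n_neurons, n_samples, n_latents = 96, 3000, 10
latents = rng.normal(size=(n_samples, n_latents))
loading = rng.normal(size=(n_latents, n_neurons))
activity = latents @ loading + 0.2 * rng.normal(size=(n_samples, n_neurons))

# PCA: how many dimensions explain most of the variance?
pca = PCA().fit(activity)
cum_var = np.cumsum(pca.explained_variance_ratio_)
print("dims for 95% variance:", np.searchsorted(cum_var, 0.95) + 1)  # ~10

# Factor Analysis (the other technique mentioned above): a 10-factor model.
fa = FactorAnalysis(n_components=10).fit(activity)
manifold_basis = fa.components_  # (10, 96): spans the estimated intrinsic manifold
print(manifold_basis.shape)
```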

Experimental Manipulation

Since the hypothesis to verify was how the manifold affects task performance and/or learning, the authors manipulated the position of the control space within or outside the intrinsic manifold. Remember, the control space is the space within the manifold that relates directly to the task’s output — in this case, cursor velocity. The rest of the space outside the control space can be thought of as relating to computations leading up to the task output.

Within-manifold perturbations were the first type of manipulation. These involved re-orienting the control space while maintaining it within the intrinsic manifold. On the other hand, outside-manifold perturbations re-oriented the control space and ‘moved’ it outside the intrinsic manifold. Within-manifold perturbations altered the mapping such that previously used task-specific co-modulation patterns were now ‘mapped’ to the task output, whereas outside-manifold perturbations resulted in co-modulation patterns that were not previously activated for that particular task being mapped to the task output. In other words, new co-modulation patterns had to be ‘generated’ for the task under outside-manifold perturbations, whereas the same set of patterns could be reused under within-manifold perturbations. Both of these perturbations impaired the monkeys’ task performance. The researchers hypothesised that within-manifold perturbations would be easier to bounce back from than their outside-manifold counterparts.
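The difference between the two perturbations can be sketched in code. In this simplified picture (the matrices B and D and the use of random permutations are illustrative stand-ins, not the paper’s exact procedure), a within-manifold perturbation reshuffles how manifold coordinates drive the cursor, while an outside-manifold perturbation reshuffles how neurons feed the manifold coordinates:

```python
import numpy as np

rng = np.random.default_rng(5)
n_neurons, n_latents = 96, 10

# Cursor velocity is read out from manifold coordinates z = B @ activity,
# where B spans the intrinsic manifold, via velocity = D @ z.
B = rng.normal(size=(n_latents, n_neurons))   # neurons -> manifold coordinates
D = rng.normal(size=(2, n_latents))           # manifold coordinates -> velocity

# Within-manifold perturbation: shuffle which manifold coordinate drives which
# velocity component. Existing co-modulation patterns still reach the readout.
perm_wm = rng.permutation(n_latents)
D_within = D[:, perm_wm]

# Outside-manifold perturbation: shuffle which *neurons* feed each coordinate.
# Producing useful output now requires activity patterns the network did not
# previously generate, i.e. new co-modulation patterns.
perm_om = rng.permutation(n_neurons)
B_outside = B[:, perm_om]

activity = rng.poisson(lam=5.0, size=n_neurons)
print(D @ (B @ activity))          # original (intuitive) mapping
print(D_within @ (B @ activity))   # within-manifold perturbed mapping
print(D @ (B_outside @ activity))  # outside-manifold perturbed mapping
```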

Visualisation of the perturbations
Visualisation of the two types of perturbations in the simplified neural space. The red arrow indicates moving the control space within the manifold. The blue arrow indicates moving the control space outside the manifold. Source

Experiment Results

Both types of perturbations, as predicted, impaired task performance. However, they induced varying recoveries. Task performance was measured by success rate (the proportion of trials in which the task was completed successfully) and acquisition time (the average time taken to complete the task in a trial).
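For concreteness, these two metrics could be computed from a per-trial log along these lines (the log format here is hypothetical):

```python
import numpy as np

# Hypothetical per-trial log: (succeeded, seconds taken to acquire the target).
trials = [(True, 1.2), (False, np.nan), (True, 0.9), (True, 1.5), (False, np.nan)]

successes = [ok for ok, _ in trials]
times = [t for ok, t in trials if ok]

success_rate = np.mean(successes)   # proportion of successful trials
acquisition_time = np.mean(times)   # mean time on successful trials
print(success_rate, round(acquisition_time, 2))
```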

Performance for both types of perturbations. The experiment was divided into three blocks — without perturbation, with perturbation, and without perturbation once again (demarcated by the dashed lines). The circled region denotes initial performance after perturbation and the starred region denotes the best perturbed performance.

The within-manifold (WM) perturbation resulted in a slow recovery back to near-initial performance after the perturbation was introduced. This suggests that re-learning occurred over time. This is further evidenced by the quick dip once the perturbation was removed. It implies that the monkey remapped the control space over time during the perturbation block and required a second (but seemingly easier) remapping once the perturbation was lifted.

The outside-manifold (OM) perturbation did not show much evidence of learning. Once the perturbation was introduced, there was no general trend of performance recovery over the perturbed trials. Additionally, once the perturbation was removed, there was no impairment (unlike in the WM results) — suggesting that the monkey did not effectively learn the new mapping. Interestingly, the initial perturbed performance was better for the OM task, which could be attributed to the monkey brain realising that the new mapping is effectively out of reach and relying on guessing right away. This, evidently, was not the case for the WM task, where an effort to relearn was made from the beginning.

These results support the hypothesis that the manifold is a reliable predictor of learnability. In other words, ‘tasks within the manifold’ are easier to learn than tasks outside it. The authors suggest that within-manifold perturbations elicit fast-timescale learning mechanisms like adaptation, whereas outside-manifold perturbations may lead to slow-timescale mechanisms like skill training being used by the brain to relearn the task at hand.

What does any of this mean?

The authors set out to establish the means by which network constraints affect the learnability of a task, and they successfully did so by establishing the existence and significance of an intrinsic manifold. Tasks within the manifold were more readily learned than those outside it.

These findings reinforce the popular observation that we learn new skills more easily when they are related to skills we already possess. We may just be generating neural patterns similar to ones we already use regularly! Skills that are quite distant from what we are familiar with require us to generate completely new neural co-modulation patterns.

How does this affect our understanding of Deep Learning?

The Artificial Neural Network (ANN) was inspired by biological neural networks and consequently shares similarities with its living counterparts. This could mean that the representations within an ANN reside within a manifold rather than filling the full n-dimensional space (where n is the number of neural units). The manifold hypothesis for artificial neural data could explain why some networks generalise better to new inputs, as well as why they develop unwanted side effects. If taken into account during development, a CNN could be trained to perform well on image classification as well as object segmentation if its manifold is designed to be ‘wide’ enough. Additionally, adversaries could leverage knowledge of a manifold and discover perturbations that, when applied to a familiar (to the CNN) input image, ‘remove’ the image from the manifold — causing the CNN to misclassify an image it previously classified correctly. Even overcoming adversarial attacks may be possible through understanding the intrinsic manifolds of ANNs.
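One way to make the “removed from the manifold” intuition concrete is to estimate a manifold for familiar representations and score new inputs by how far they fall from it. The sketch below (the 512-D features, the 20-D manifold, and the random nudge standing in for an adversarial perturbation are all invented for illustration) uses PCA reconstruction error as that off-manifold score:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)

# Stand-in for a network's internal representations of familiar inputs:
# 512-D features that actually live near a 20-D subspace.
n_features, n_samples, n_latents = 512, 5000, 20
latents = rng.normal(size=(n_samples, n_latents))
features = latents @ rng.normal(size=(n_latents, n_features))
features += 0.05 * rng.normal(size=(n_samples, n_features))

pca = PCA(n_components=20).fit(features)

def off_manifold_score(x):
    """Reconstruction error after projecting onto the estimated manifold."""
    recon = pca.inverse_transform(pca.transform(x.reshape(1, -1)))[0]
    return float(np.linalg.norm(x - recon))

familiar = features[0]
perturbed = familiar + 2.0 * rng.normal(size=n_features)  # off-manifold nudge

print(off_manifold_score(familiar))   # small: close to the manifold
print(off_manifold_score(perturbed))  # much larger: pushed off the manifold
```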

The idea of the neural manifold is a great example of how new findings in Neuroscience have great potential to elevate the state of Deep Learning and further Artificial Intelligence.


Niranjan Rajesh
Bits and Neurons

Hey! I am a student at Ashoka interested in the intersection of computation and cognition. I write my thoughts on cool concepts and papers from this field.