Highlights from Neuromatch 4.0 Conference

Jan 5

A few interesting papers and posters from the Neuromatch 4.0 conference (held in December 2021), with topics at the intersection of neuroscience and AI.

1. How Do Dendritic Properties Impact Neural Computation

In the first talk, by Ilena Jones and Konrad Kording, dendritic processes (forming tree-like structures) are used to solve machine learning tasks in place of simple connection weights. It is interesting to see that modelling dendrites and synapses under biological constraints can still solve machine learning tasks.

2. Model dimensionality scales with the performance of deep learning models for biological vision

In this paper, Eric Elmoznino and Michael Bonner show that the latent dimensionality of a model's representations scales with its performance on vision tasks, in contrast to various theories that the brain tries to represent information in a highly compressed, low-dimensional format.
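One common way to quantify the dimensionality of a model's representations is the participation ratio of the PCA eigenvalue spectrum. The sketch below is my own illustration of that estimator; the paper may use a different measure.

```python
import numpy as np

def effective_dimensionality(features: np.ndarray) -> float:
    """Participation ratio of the eigenvalue spectrum:
    (sum of eigenvalues)^2 / (sum of squared eigenvalues).
    features: (n_samples, n_units) activation matrix."""
    centered = features - features.mean(axis=0)
    # Covariance eigenvalues via the singular values of the centered data
    eigvals = np.linalg.svd(centered, compute_uv=False) ** 2
    return eigvals.sum() ** 2 / (eigvals ** 2).sum()

# A representation spread evenly over many directions scores high;
# a highly compressed (low-rank) one scores close to 1.
rng = np.random.default_rng(0)
isotropic = rng.normal(size=(1000, 50))                              # full rank
compressed = rng.normal(size=(1000, 1)) @ rng.normal(size=(1, 50))   # rank 1
print(effective_dimensionality(isotropic))   # close to 50
print(effective_dimensionality(compressed))  # close to 1
```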

3. Visualization and Graph Exploration of the Fruit Fly Brain Datasets with NeuroNLP++

NeuroNLP++ improves on NeuroNLP, which relied on rule-based NLP methods for answering queries about the fruit fly brain. Rule-based matching is limiting, since questions can take arbitrary forms beyond the supported templates. In this presentation, Dense Passage Retrieval and PubMedBERT are used to query a collection of ontology terms and their descriptions, enabling the results to be visualized in a 3D brain animation.

4. A large scale computational model of visual word recognition and its comparison to MEG data

In this talk, Marijn van Vliet presents a baseline model mapping onto the neural signatures observed in neuroimaging studies. He shows that activity in the relevant brain regions responds similarly to activations in the deep layers of a convolutional network, and proposes this as a baseline.

5. Distilling low-dimensional representations of data using the TRACE model

In this talk, Mehdi Orouji and Megan Peters propose a task-relevant, non-parametric model called TRACE (Task-Relevant Autoencoder via Classifier Enhancement). The authors show that TRACE encodes more task-relevant features in its bottleneck-layer encoding than a standard autoencoder, while also outperforming vanilla autoencoders on reconstruction.
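The general shape of a classifier-enhanced autoencoder can be sketched as follows. This is an illustrative PyTorch sketch of the idea, not the authors' TRACE implementation; the layer sizes and the 0.5 loss weight are my assumptions.

```python
import torch
import torch.nn as nn

class TraceLikeAutoencoder(nn.Module):
    """Sketch of a task-relevant autoencoder: a classifier head attached
    to the bottleneck pushes task-relevant features into the encoding.
    (Illustrative only; not the authors' TRACE implementation.)"""
    def __init__(self, n_in=100, n_latent=8, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(),
                                     nn.Linear(32, n_latent))
        self.decoder = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                     nn.Linear(32, n_in))
        self.classifier = nn.Linear(n_latent, n_classes)

    def forward(self, x):
        z = self.encoder(x)                       # bottleneck encoding
        return self.decoder(z), self.classifier(z)

# Joint loss: reconstruction plus a weighted classification term
# that forces task information into the bottleneck.
model = TraceLikeAutoencoder()
x = torch.randn(64, 100)
y = torch.randint(0, 2, (64,))
recon, logits = model(x)
loss = (nn.functional.mse_loss(recon, x)
        + 0.5 * nn.functional.cross_entropy(logits, y))
loss.backward()
```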

6. Binge eating suppresses flavor representations in the mouse olfactory cortex

In this talk, Hung Lo tests whether binge eating has a causal relationship with neuronal activity in the olfactory cortex. As evidence, he shows that inhibitory neurons exhibit similar activity after repeated trials, and that serotonin release from dorsal raphe neurons cannot account for the suppression phenomenon.

7. Do rats see like us? The importance of contrast features in rat vision

Anna Elizabeth Schnell and Hans Op de Beeck ask whether contrast features alone are enough to explain face-recognition tasks in rats.

8. Rank similarity filters for computationally-efficient machine learning on high dimensional data

This talk presents a new technique that is essentially a non-parametric, single-layer neural network, much as an SVM can be viewed as an approximation of an MLP. It uses the concept of filters: each filter captures some information from the N-dimensional inputs, with its weights determined by rank or "confusion" techniques. Using this method, one can transform any dataset into a linearly separable form. The resulting classifier can also be used to reduce the dimensionality of the data, and yields roughly a 10x performance improvement over traditional SVMs and KNNs while maintaining accuracy on large datasets.
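One reading of the filter idea can be sketched as follows: convert each sample's features to within-sample ranks, build one filter per class from the mean rank pattern, and use similarity to the filters as a low-dimensional, more linearly separable feature space. This is my own loose illustration of the concept, not the authors' algorithm; the filter construction and similarity measure here are assumptions.

```python
import numpy as np
from scipy.stats import rankdata

def rank_transform(X):
    """Replace each sample's feature values with their within-sample ranks."""
    return np.apply_along_axis(rankdata, 1, X)

def fit_rank_filters(X, y):
    """One filter per class: the mean rank pattern of that class's samples.
    (Illustrative reading of the idea, not the authors' exact method.)"""
    R = rank_transform(X)
    return {c: R[y == c].mean(axis=0) for c in np.unique(y)}

def rank_similarity_features(X, filters):
    """Project each sample's rank vector onto every filter; the resulting
    low-dimensional similarity scores can feed a simple linear classifier."""
    R = rank_transform(X)
    F = np.stack([filters[c] for c in sorted(filters)])
    return R @ F.T

# Two classes that differ in WHICH features tend to be large,
# so their within-sample rank patterns differ.
rng = np.random.default_rng(1)
X0 = rng.normal(size=(50, 200)); X0[:, :100] += 1.0
X1 = rng.normal(size=(50, 200)); X1[:, 100:] += 1.0
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 50)

feats = rank_similarity_features(X, fit_rank_filters(X, y))
print(feats.shape)  # (100, 2): 200-D inputs reduced to 2 similarity scores
```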

9. An investigation of the relationship between extroversion and brain laterality

Based on data from previous research, Amirreza Farbakhsh and Tarannom Taghavi show that there is a relationship between extroversion and brain laterality. Laterality refers to the degree to which cognitive functions, such as language acquisition, articulation, motor control, or emotional control, originate in the right or left hemisphere of the brain. The authors measure laterality [1] as the average difference in voxel activation between the two hemispheres, and use t-tests to relate NEO-FFI extroversion scores to these laterality scores.
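A toy version of that laterality definition is easy to write down. This is my own illustration on synthetic data, using a Pearson correlation in place of the authors' statistical tests; the numbers and the link to NEO-FFI scores are fabricated purely to exercise the computation.

```python
import numpy as np
from scipy.stats import pearsonr

def laterality_index(left_voxels, right_voxels):
    """Average difference in voxel activation between the two hemispheres.
    Positive = left-dominant activity (the sign is just a convention)."""
    return np.mean(left_voxels) - np.mean(right_voxels)

rng = np.random.default_rng(42)
n_subjects = 30
# Synthetic data: per-subject laterality, plus a fake NEO-FFI extroversion
# score loosely tied to it, just to demonstrate the correlation step.
lat = np.array([laterality_index(rng.normal(0.2, 1, 500), rng.normal(0, 1, 500))
                for _ in range(n_subjects)])
neo_ffi = 30 + 10 * lat + rng.normal(0, 0.2, n_subjects)
r, p = pearsonr(lat, neo_ffi)
print(f"r = {r:.2f}, p = {p:.3g}")
```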

10. Where is all the non-linearity? (Video not Available)

The author proposes RNN PSID (Recurrent Neural Network Preferential Subspace Identification). RNN PSID uses latent states inferred from neuronal activity for behavioral decoding, and is also helpful for better causal decoding. The non-linearity in the RNN parameters is sufficient to capture the non-linear dynamics present in LFP and spiking signals.

11. Mouse SST and VIP Interneurons in V1

As someone who has worked on analyzing V1 before [2], I was especially interested in this work. The authors ask whether SST or VIP interneurons respond more strongly to novel visual stimuli, using statistical machine learning techniques with 8-fold cross-validation to predict the stimulus response.
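For readers unfamiliar with the decoding setup, an 8-fold cross-validated stimulus decoder looks roughly like this. The data here is a synthetic stand-in for interneuron activity, and logistic regression is my assumed classifier; the poster does not specify the model.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for trial-wise interneuron activity:
# 200 trials x 40 features, labelled by a hypothetical
# familiar-vs-novel stimulus condition.
X, y = make_classification(n_samples=200, n_features=40, n_informative=10,
                           random_state=0)

# 8-fold CV: train on 7/8 of the trials, test on the held-out 1/8, 8 times.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=8)
print(f"8-fold accuracies: {np.round(scores, 2)}")
print(f"mean accuracy: {scores.mean():.2f}")
```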

12. Universal Theory of Switching

This poster will interest anyone who finds attention mechanisms or genetic algorithms appealing. The authors present an idea for how switching mechanisms drive improvements in biological organisms as well as in simulated agents. They take this idea to an extreme by implementing switching both within a single algorithm (intra-algorithmic switching) and between two different algorithms (inter-algorithmic switching).


Before you leave make sure to follow me on Twitter as @agrover112.

Be sure to follow @Orthogonal_lab for more content on topics such as Computational Neuroscience, Artificial Intelligence, and more!
