Advancing neural data analysis with maximum entropy models

CDS’s Cristina Savin & co. explore how maximum entropy models can be used to determine regularities in neural data

New recording technologies give us more tools than ever to gather data about brain activity. This raises new challenges on the data analysis side, as new statistical methods are needed to determine how the activity of large neural populations gives rise to brain function.

One particular challenge is to characterize the statistics of neural activity patterns, sometimes referred to as the ‘neural dictionary’.

It is well established that neural responses (described as binary vectors, with a ‘1’ for each active neuron and a ‘0’ for each silent one) are variable but highly structured: biological activity is restricted to a small subset of the exponentially many possible patterns. Figuring out what drives these regularities is critical for understanding the neural code and how neural activity gives rise to behavior, but it is difficult in practice.
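To make the ‘neural dictionary’ idea concrete, here is a minimal Python sketch that binarizes spike counts and tallies how often each population-wide binary word occurs. The data, array shapes, and random-generation parameters are illustrative assumptions, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: spike counts for 10 neurons across 5,000 time bins.
counts = rng.poisson(lam=0.1, size=(5000, 10))

# Binarize: '1' if a neuron fired in a bin, '0' if it stayed silent.
patterns = (counts > 0).astype(int)

# Tally how often each binary word occurs -- the empirical
# 'neural dictionary'. With 10 neurons there are 2**10 = 1024
# possible words, but only a small subset typically shows up.
words, freqs = np.unique(patterns, axis=0, return_counts=True)
print(f"{len(words)} distinct patterns out of {2 ** patterns.shape[1]} possible")
```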

Neuroscientists usually turn to one of two approaches to address the problem, as Cristina Savin explains in a recent paper co-authored with Gašper Tkačik (IST Austria).

The first, rooted in frequentist statistics and favored by experimentalists, strives to “identify and report single, strong, salient signatures of neural computation.” The second, grounded in Bayesian statistics and often used by theorists, seeks to “build a probabilistic model of the neural activity that predicts the probability of every possible activity configuration.”
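As an illustration of the frequentist route, the sketch below implements one common shuffle control, reusing the hypothetical `patterns` array from the sketch above (the statistic tested is my own illustrative choice). Permuting each neuron’s activity independently across time bins preserves single-neuron firing rates but destroys correlations, yielding a null distribution against which an observed population statistic can be tested:

```python
import numpy as np

def synchrony(patterns):
    """Fraction of time bins in which two or more neurons are co-active."""
    return np.mean(patterns.sum(axis=1) >= 2)

def shuffle_null(patterns, n_shuffles=1000, seed=0):
    """Build a null distribution by permuting each neuron's column
    independently: firing rates are preserved, correlations are not."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_shuffles)
    for i in range(n_shuffles):
        shuffled = np.column_stack([rng.permutation(col) for col in patterns.T])
        null[i] = synchrony(shuffled)
    return null

# Usage, with `patterns` from the previous sketch:
# observed = synchrony(patterns)
# null = shuffle_null(patterns)
# p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
```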

Both approaches are powerful, but they also have their drawbacks. Is there a way to link the two together? For Savin and Tkačik, the answer is yes — if you use maximum entropy (MaxEnt) models. “MaxEnt models link the two approaches,” the researchers explain, “by being, at the same time, bona fide probabilistic models for neural activity, as well as generalizations of frequentist shuffles.”

Moreover, MaxEnt models can serve “as a baseline comparison to increasingly popular unsupervised models from machine learning.”
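To show what such a model looks like in practice, here is a minimal sketch of fitting a pairwise MaxEnt (Ising-like) model, one common instance of the family the paper discusses, by gradient ascent on the log-likelihood. The fitted model matches the data’s mean activities and pairwise correlations while making no further assumptions; the function name and parameters are my own illustrative choices, not the authors’ code, and the exact enumeration of all 2^n states is only feasible for small populations:

```python
import itertools
import numpy as np

def fit_pairwise_maxent(patterns, n_steps=2000, lr=0.1):
    """Fit a pairwise MaxEnt model, P(x) proportional to
    exp(h.x + x.J.x / 2), to binary patterns by gradient ascent on
    the log-likelihood. Matching the data's means and pairwise
    correlations is the defining MaxEnt constraint here. Exact
    enumeration of all 2**n states: small populations only."""
    n = patterns.shape[1]
    states = np.array(list(itertools.product([0, 1], repeat=n)), dtype=float)

    # Statistics the model is constrained to reproduce.
    mean_data = patterns.mean(axis=0)
    corr_data = patterns.T @ patterns / len(patterns)

    h = np.zeros(n)        # biases (single-neuron terms)
    J = np.zeros((n, n))   # pairwise couplings

    for _ in range(n_steps):
        energy = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()

        mean_model = p @ states
        corr_model = states.T @ (states * p[:, None])

        # Log-likelihood gradient: data statistics minus model statistics.
        h += lr * (mean_data - mean_model)
        dJ = lr * (corr_data - corr_model)
        np.fill_diagonal(dJ, 0.0)  # diagonal terms are absorbed by h
        J += dJ

    return h, J, states, p
```

After fitting, `p @ states` should closely match `patterns.mean(axis=0)`, and the model’s predicted probability for each binary word can be compared against its empirical frequency from the first sketch — exactly the kind of baseline comparison described above.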

To read more about their work, see Savin and Tkačik’s full paper.

by Cherrie Kwok
