How to ‘crack the code’ of the developing brain?

Paul Matusz
Jan 23 · 10 min read

Development across the lifespan is a complex process that depends on many mechanisms and factors. This makes multivariate computational modelling an ideal central analytic tool for studying developmental change.


Many questions in developmental neuroscience focus on the temporal dynamics of the relationships between the brain, behavior, and sociodemographic variables. What connects different computational approaches is that they aim to extract relationships and differences from the data based on the regularities within them, while minimizing the often subjective, and therefore biased, decisions that the experimenter makes throughout the analysis. This post introduces some computational approaches used in developmental cognitive neuroscience, with summaries of example research papers.

The two main branches of machine learning are supervised and unsupervised learning. Among unsupervised learning methods, the most commonly used is clustering, which groups data points by the similarity of their features or properties, without relying on predefined labels. Among supervised learning methods, classifiers are the most commonly used: they learn from labeled examples to assign new data to known categories. But what happens when the data have too many dimensions and simple clustering and classification algorithms cannot tame them? This is where factor analysis shines. Factor analysis is used to reduce the dimensionality of the data before applying clustering or classification algorithms, yielding more robust results. This dimensionality reduction is useful when scientists are interested in using a smaller number of variables to explain or predict a certain phenomenon.
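
To make these distinctions concrete, here is a minimal sketch in Python, using scikit-learn and purely synthetic data (so the numbers are meaningless), of the three ideas: reducing dimensionality with factor analysis, clustering without labels, and classifying with labels.

```python
# A minimal sketch (synthetic data, scikit-learn) of dimensionality reduction,
# unsupervised clustering, and supervised classification.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))      # 200 "participants" x 30 test scores (synthetic)
y = rng.integers(0, 2, size=200)    # hypothetical group labels (e.g. diagnosis)

# 1) Factor analysis: compress 30 correlated measures into 3 latent factors
factors = FactorAnalysis(n_components=3).fit_transform(X)

# 2) Unsupervised learning: cluster participants by similarity of their factor scores
clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(factors)

# 3) Supervised learning: use the labels to train a classifier and estimate its accuracy
accuracy = cross_val_score(LogisticRegression(max_iter=1000), factors, y, cv=5).mean()
print(clusters[:10], round(accuracy, 2))
```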

Ok, enough introduction about computational methods; let’s try to understand how they fit into questions about neurocognitive development. Since many types of data can be collected when studying the development of the brain and cognition, we picked a few papers that use EEG (electroencephalography) and (f)MRI (functional and structural magnetic resonance imaging), the two most commonly used neuroimaging methods. Now let’s talk about a few studies where computational modeling of EEG and (f)MRI data provided novel insights into how the brain and cognition develop. We will start with an example of how clustering can be used to better understand development. Second, we will describe how individual differences in lifespan development can be taken into account in novel ways, and which methods are well-suited for this purpose. Lastly, we will describe structural equation modeling (SEM) and how it can be used to integrate different types of brain data to explain development and skill learning.

(1) Studying development with clustering approaches

Difficulties in learning and attention seriously impact the lives of many children. A research team in Cambridge, UK, has recently shown how clustering algorithms can help to better understand these difficulties. In one study (Bathelt et al., 2018, JAACAP), the researchers tested over 400 children diagnosed with a variety of difficulties with attention, learning, and/or memory, and measured their higher executive functions as well as brain structure (using structural and diffusion-weighted techniques). As there are persistent problems in diagnosing cognitive and learning disorders in children, the authors used a battery of tests measuring executive functioning to group — or cluster — children struggling at school by the similarities in their behavioral measures. Subgroups are defined by grouping together individuals who are most similar to each other while being as distinct as possible from the other subgroups. Clustering algorithms do require some a priori assumptions (e.g. about the geometric properties of the cluster shapes). To identify the necessary parameters, the authors used an advance from network science: community detection.

Community detection algorithms use mathematical tools to quantify the organisation of networks and the relationships between their nodes. The authors took each child’s cognitive profile as a node in a network, and with each iteration the network was subdivided into communities of nodes. This resulted in a new, purely data-driven grouping based on cognitive problems: group 1 was characterized by inattention/hyperactivity and impulsivity, group 2 by learning difficulties, and group 3 by conduct and peer-relationship problems. What is even more interesting is that the children’s brain structure measurements also fit this grouping — large brain networks previously implicated in cognitive and behavioral regulation differed across the three communities identified by the clustering algorithm. Some care is warranted, however, as such network analyses are not sensitive to the severity of the different cognitive problems.
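
To give a flavour of the approach, below is a rough sketch (not the authors’ actual pipeline) of how one might build a network of children from their cognitive profiles and partition it with a standard community-detection algorithm from the networkx package; the synthetic data and the similarity threshold are arbitrary choices for illustration.

```python
# Hedged sketch: each child is a node, children with similar cognitive profiles
# are connected, and community detection finds data-driven subgroups.
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

rng = np.random.default_rng(1)
profiles = rng.normal(size=(100, 8))    # 100 children x 8 executive-function scores

similarity = np.corrcoef(profiles)      # child-by-child similarity matrix

G = nx.Graph()
G.add_nodes_from(range(len(profiles)))
for i in range(len(profiles)):
    for j in range(i + 1, len(profiles)):
        if similarity[i, j] > 0.3:      # arbitrary threshold for drawing an edge
            G.add_edge(i, j, weight=similarity[i, j])

# Partition the network into communities of children with similar profiles
communities = greedy_modularity_communities(G, weight="weight")
print([len(c) for c in communities])
```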

The Astle team (Astle et al., 2019, Dev Sci) aimed to tackle this in another study, in which they tested over 500 “struggling learners” on a battery of cognitive and learning tasks, but this time analysed the data using another unsupervised learning approach. Here they used a type of artificial neural network called a Self-Organising Map (SOM), which projects complex input data onto a two-dimensional representational grid of nodes called a “map”. In this study, each child was a node, and the map represented their profiles: the closer two children were on the map, the more similar their profiles (for an example, see Figure 3 in the paper). By combining SOMs with clustering, the authors identified four main groups: 1) those with broad cognitive deficits, 2) those with age-appropriate cognitive profiles, 3) those with working memory problems, and 4) those with phonological problems. Differences in white matter connections were also found between the SOM-defined groups.
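
The sketch below illustrates the general SOM-plus-clustering idea using the MiniSom Python package and synthetic profiles; the grid size, training settings and the use of MiniSom itself are our assumptions for illustration and do not reproduce the study’s pipeline.

```python
# Rough sketch of the SOM idea with MiniSom (pip install minisom) on synthetic data.
import numpy as np
from minisom import MiniSom
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
profiles = rng.normal(size=(500, 10))            # 500 children x 10 cognitive measures

# Project the 10-dimensional profiles onto a 2-D grid ("map") of 15 x 15 nodes
som = MiniSom(15, 15, input_len=10, sigma=1.5, learning_rate=0.5, random_seed=0)
som.random_weights_init(profiles)
som.train_random(profiles, num_iteration=5000)

# Each child's position on the map is the coordinate of their best-matching node;
# children that land close together have similar profiles
positions = np.array([som.winner(p) for p in profiles])

# Cluster the map positions to recover discrete groups (the paper reports four)
groups = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(positions)
print(np.bincount(groups))
```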

The identified groups did not pattern with the children’s formal diagnoses, suggesting that machine learning approaches can help tackle persisting problems with diagnosing cognitive and learning difficulties. Currently, diagnosing both learning difficulties, like dyslexia, and attentional difficulties, like ADHD, relies on how well clinicians can classify behavioral symptoms. The problem is that every child’s symptoms are different, and symptoms of one disorder often co-occur with symptoms of another. There is also little knowledge of how these symptoms relate to differences in brain structure or function, which one would expect to provide the neural bases of the different disorders. Clustering algorithms that focus on identifying dissociable patterns in data show promise in disentangling complex symptom profiles in children and linking them with brain mechanisms. In this way, machine learning can help us better understand the brain and cognitive bases of developmental disorders and so, hopefully, their eventual treatment.

(2) Studying development with classification approaches

Studying development can be particularly challenging in lifespan studies. There, age-related changes can be confounded by large within-group heterogeneity, especially when data are collected from individuals at very different developmental stages (e.g. from children to older adults). This challenge can be tackled through computational modeling, which can group participants based on the between-subject variability within the overall sample. However, it is the creation of person-specific models that is potentially critical, as these can account for high variability in the data, which makes them especially useful for younger groups.

Karch et al. (2015) studied age-related changes in the mechanisms governing working memory and visual selective attention across school-aged children, young adults, and older adults. EEG data were used to discriminate different levels of working memory load and the focus of visual attention, but here the authors used multivariate pattern classification and focused on creating and testing person-specific models of the EEG correlates of the two cognitive processes. To evaluate each candidate model, Karch et al. used a metric called balanced accuracy (BAC), which accounts for unbalanced class frequencies (often found in EEG data) and is computed as the average of the accuracies obtained for each class. The higher the BAC, the better the discriminability of brain responses between two experimental conditions, and thus the better the person-specific model. The authors showed that in all three age groups these person-specific models were more accurate than the person-unspecific models used in classical EEG data analysis. Notably, the between-person variance observed by Karch et al. (2015) was smaller in older than in younger adults. This is an important finding, as it runs contrary to the increased variability in both behavioral and neural measures typically reported in older adults.
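
As a rough illustration of what person-specific decoding with BAC looks like in code, the sketch below fits a separate cross-validated classifier for each (synthetic) participant using scikit-learn; the features, classifier, and trial counts are placeholders rather than the authors’ actual analysis.

```python
# Hedged, simplified illustration of person-specific decoding evaluated with
# balanced accuracy (BAC): one classifier per participant, synthetic "EEG" features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_subjects, n_trials, n_features = 20, 120, 64    # e.g. 64 EEG channels per trial

person_specific_bac = []
for s in range(n_subjects):
    X = rng.normal(size=(n_trials, n_features))   # trials x features for one person
    y = rng.integers(0, 2, size=n_trials)         # condition labels (e.g. memory load)
    # BAC is the mean of the per-class accuracies, so chance stays at 0.5 even
    # for unbalanced designs; sklearn exposes it as the 'balanced_accuracy' scorer
    bac = cross_val_score(SVC(kernel="linear"), X, y,
                          cv=5, scoring="balanced_accuracy").mean()
    person_specific_bac.append(bac)

print(np.round(person_specific_bac, 2))
```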

(3) Studying development with structural equation modeling

In contrast to classification and clustering approaches, factor-based analyses try to reduce a variety of measured variables to a smaller set of latent factors. Nowadays, latent factor models are typically part of SEM, an approach that combines the strengths of latent variable modelling and path modelling. Path modelling is an extension of multiple regression that simultaneously estimates multiple hypothesised relationships, including the direction of causal links, and allows constructs to act as both dependent and independent variables. In this context, relationships are identified between processes derived from multiple tasks, each assumed to contribute a different component, or portion of variance, toward explaining a given construct in full. SEM has clear advantages for the study of development due to its flexibility as a framework for multivariate analyses. First, SEM forces researchers to formulate an explicit model of the relationships within the data, which is then compared to the observed data (e.g. via a covariance matrix). The data either fit, and thus confirm, the model, or falsify it, necessitating the formulation of a different model. Relatedly, SEM forces researchers to spell out assumptions that may be present in the data but are less explicitly formulated in other approaches (e.g. equal variances). Second, SEM allows researchers to account for measurement error in the observed scores. This increases the chance of detecting genuine relationships but also renders research designs more generalizable and so more valid.
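
As an illustration of how such a model is specified in practice, here is a toy SEM written in lavaan-style syntax with the semopy Python package (pip install semopy); all variable names (task1-3, roi1-3, age) and the simulated data are hypothetical, chosen only to show the measurement (latent factor) and path (structural) parts of a model.

```python
# Toy SEM sketch with semopy: two latent factors measured by observed variables,
# plus a path model relating them; data are simulated with a known structure.
import numpy as np
import pandas as pd
import semopy

rng = np.random.default_rng(4)
n = 300
brain_f = rng.normal(size=n)                            # latent "brain" factor
cog_f = 0.6 * brain_f + rng.normal(scale=0.8, size=n)   # latent "cognition" factor
data = pd.DataFrame({
    "roi1": brain_f + rng.normal(scale=0.5, size=n),
    "roi2": brain_f + rng.normal(scale=0.5, size=n),
    "roi3": brain_f + rng.normal(scale=0.5, size=n),
    "task1": cog_f + rng.normal(scale=0.5, size=n),
    "task2": cog_f + rng.normal(scale=0.5, size=n),
    "task3": cog_f + rng.normal(scale=0.5, size=n),
    "age": rng.normal(size=n),
})

description = """
cognition =~ task1 + task2 + task3
brain =~ roi1 + roi2 + roi3
cognition ~ brain + age
"""

model = semopy.Model(description)
model.fit(data)
print(model.inspect())            # factor loadings and path coefficients
print(semopy.calc_stats(model))   # fit indices (e.g. CFI, RMSEA) to evaluate the model
```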

By using SEM, one can incorporate both brain regions and brain networks into the study of brain-behavior associations within a single model. This is important, as there are different ways in which one can analyze functional MRI data. One can look at small parts of the brain, like single voxels or individual regions, or at larger parts, like networks that span multiple brain regions. Although fMRI data are easier to analyze when the brain is divided into smaller parts, we know that brain regions do not work in isolation and that they interact to give rise to complex cognitive processes. As such, the current, brain-region-specific analysis techniques may not be optimal for providing accurate models of brain-behavior associations.

SEM allows researchers to estimate the unique contributions of either specific brain regions or brain networks to, for example, working memory, arithmetic skills, or relational reasoning (creating relations between objects or ideas). Using this approach, Bolt and colleagues (2018) found that different brain correlates explained behavioral performance in these three tasks. For the working memory and relational tasks, the activation of large brain networks (here, the frontoparietal network), rather than their constituent regions of interest alone, better explained how well the tasks were performed. In contrast, for the arithmetic task, activity within a specific brain region (here, the temporo-parietal junction) was more closely related to how well the task was performed. Thus, SEM analyses can reveal novel insights into brain-behavior links that can increase the efficiency of future work.

Notably, the use of SEM requires deep knowledge of the hierarchical functional organization of the human brain, as it is a hypothesis-driven approach. Therefore, the exclusion of one or another variable (a priori thought to be unimportant) has important implications for the interpretation of the results and the generalizability of the developed model. Thus, other, data-driven approaches may be more appropriate when the research question is rather exploratory. There are also nontrivial challenges in SEM related to fitting the model to the data. The most common way to estimate the parameters of an SEM is the maximum likelihood approach. It is also important to establish whether there is measurement invariance in the model, as its absence may lead to incongruent conclusions about the tested latent variables. At the same time, SEM can accommodate missing data, which often occur in cognitive neuroscience studies: in this case, Full Information Maximum Likelihood (FIML) estimation can be applied to fit models on the full dataset, including cases with missing values. As SEM is more complex than a t-test, it is possible that the model will not converge (e.g. when the observations fit the specified model too poorly). Also, the power and sample size needed to address the research question in an SEM study must be computed carefully. As always, a larger sample size leads to more robust and generalizable results, but even evidence inferred from a model that fits a larger dataset may not be enough to validate the hypotheses.

Conclusion

To summarize, computational models help to grasp and account for the complexity of the myriad processes across the brain and behavior that contribute to age-dependent change. One of the main take-home messages is that the complexity in the measured data may often be captured most accurately by identifying patterns that form in the data while minimising the number of assumptions about them, as shown by grouping and classification approaches. In contrast, in other approaches, like SEM, the choice of the particular variables is crucial to the obtained results. It is possible that the two kinds of computational approaches are best used in a cyclical fashion. Where computational approaches may be particularly useful in understanding development-related changes is the clinic. Some of the studies described here have laid the foundations for systematic research into understanding how time-dependent change across brain and behavioral or cognitive measures contributes to the emergence of different developmental and learning disorders.

We hope this post helped you navigate through the rapidly evolving field of computational modeling applied to cognitive neuroscience and that you can now choose the most appropriate method for your research question.

If you read this far, and you liked this post, tweet to the authors (Paul Matusz, Nora Turoman, Lora Fanda, Cristina Simon-Martinez and Antoine Widmer). If you’re interested in what our group is doing in the context of computational modelling to better understand the development of the brain and cognition, have a look at the website of our amblyopia project on Medgift group’s website — http://medgift.hevs.ch/wordpress/projects/gamb/ — or for even more info, at the site of our GROWN group — https://groupforrealworldneuroscience.wordpress.com/. And stay tuned for future posts on advances in cognitive neuroscience!

