COMPARATIVE CONNECTOMIC STUDIES AMONG SPECIES

INTERVIEW WITH SAHIL LOOMBA

Albane le Tournoulx de la Villegeorges
WEBKNOSSOS
Jul 10, 2023 · 5 min read


In 2022, Sahil Loomba et al. published a study in Science that used volume electron microscopy (EM) to compare synaptic connectivity in the mouse, macaque, and human cortex. They found that, despite the greater number of cells in the human brain, synaptic connectivity in the human cortex is much sparser than in the other species. They also found that the human cortex contains about three times more interneurons than the mouse cortex. However, this does not commensurately shift the excitation-inhibition balance on pyramidal (excitatory) neurons across species. Instead, these additional interneurons contribute to an order-of-magnitude increase in interneuron-to-interneuron connectivity. This discovery has had a big impact on neuroscience, opening the door for more research into why these differences exist.

Sahil Loomba, postdoc at MPI Brain

Sahil Loomba is a postdoctoral research scientist at the Max Planck Institute for Brain Research, in the department headed by Moritz Helmstaedter. In this interview, he discusses the origins of the research, the challenges he encountered, the tools and techniques employed, how Voxelytics and WEBKNOSSOS supported his work, and his perspective on the future of comparative connectomics across not only species, but also ages and disease models.

How did your research on connectomic comparison start?

During my PhD at MPI (Max Planck Institute), I had the chance to be in the right place at the right time. A neuroscientist (Jakob Straehle, a senior postdoc at the time and now a neurosurgeon at the Klinik für Neurochirurgie in Freiburg) had started a pipeline for acquiring volume electron microscopy (EM) datasets across different species, exploring surgical techniques and tissue extraction methods to minimize damage during the process. This was especially relevant for the human tissue: resected samples acquired in collaboration with the department of neurosurgery at the TU Munich Clinic. Together, we processed and manually annotated datasets of both human and macaque brains. Jakob’s work on these very first surgical human tissue samples showed us that, to avoid artifacts in the data, several conditions such as temperature need to be tightly controlled. This was the basis for our comparative study, in which we continued to acquire more datasets of the three species: mouse, macaque, and human.

How many datasets did you acquire?

When you perform surgery to extract tissue for volume EM, the goal is typically to preserve as many samples as possible. For instance, in the case of human tissue, we extracted a dozen samples per patient across the temporal and frontal cortices, but ultimately used only 2 of them to acquire EM datasets. In total, we processed 9 datasets: 5 from mice, 2 from a single macaque, and 2 from 2 different human patients. To give you a sense of the analysis scope: each processed dataset ranges in size from 200 GB to 1 TB, with an average of approximately 700,000 synapses. In fact, one of the mouse datasets even contained 1.5 million synapses. You can have a look at the datasets here.

Neuron and connectome reconstructions of Mouse, Macaque, and Human cortex from electron microscopy, Loomba et al., Science 2022. Animation by scalable minds.

Which challenges have you encountered?

As I mentioned earlier, we obtained several small tissue samples in order to select the highest-quality ones. Initially, our intention was to compare differences in cortical layer 4 and layer 2/3 across species. However, due to the low number of datasets available in layer 4, we decided to shift our focus to layer 2/3, where we could scale up the number of datasets across species. This is particularly important for drawing conclusive arguments in comparative studies across species, as it helps control for differences attributed to factors such as age, brain area, and individual.

Another limitation we face is that lab mice have limited experiences over their lifespan. This is not the case for macaques involved in physiological studies, and certainly not for human patients. This raises the question of how experience shapes brain structure and how much it contributes to the connectomic differences we observe. For now, it remains an open question.

Can you tell us more about the connectome generation?

It used to take us at least 3 to 6 months to align and segment an EM dataset. In 2021, we reached out to scalable minds to help us develop a pipeline that would make this process more efficient. Since we needed to analyze a larger number of datasets, going from 3 to 9, a significant acceleration was needed. With Voxelytics, a machine learning pipeline developed by scalable minds, we were able to generate a segmentation from an aligned dataset in a matter of days. This accelerated workflow enabled us to focus on the biological analysis much sooner than previously possible.

What exactly is Voxelytics and how did it help you for your research?

Voxelytics is a highly efficient pipeline for analyzing volume EM datasets, from dataset alignment to connectome generation. Its machine learning models have been developed and trained over the past years by scalable minds, in close collaboration with MPI. Recently, we learned to run Voxelytics in our own lab, which gives us a lot of flexibility. The interface is user-friendly and intuitive: on the left side of the screen, a diagram describes the pipeline steps, where each step produces results that feed into the next. On the right side, further information as well as links to WEBKNOSSOS are provided. By clicking on a link, we can directly open the dataset with the task results and evaluate them.

Examples of tasks conducted with Voxelytics and their results visible in WEBKNOSSOS.

Typically, I don’t even look at the intermediate results. I simply run the Voxelytics pipeline on a new dataset and review the results approximately two days later. If the results don’t make sense, we occasionally generate ground truth using WEBKNOSSOS to estimate the error rates.
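A minimal sketch of what such an error-rate estimate could look like, assuming detected and ground-truth synapses are available as coordinate lists (the matching function, tolerance, and data below are illustrative assumptions, not the actual Voxelytics evaluation):

```python
# Sketch: comparing automatically detected synapse positions against a
# small manually annotated ground truth to estimate error rates.
# All coordinates and the tolerance are hypothetical.

def match_synapses(predicted, ground_truth, tolerance=100):
    """Greedily match predicted synapse positions to ground-truth ones
    within a per-axis distance tolerance; return (tp, fp, fn)."""
    unmatched_gt = list(ground_truth)
    tp = 0
    for p in predicted:
        for g in unmatched_gt:
            if all(abs(pc - gc) <= tolerance for pc, gc in zip(p, g)):
                unmatched_gt.remove(g)  # each ground-truth synapse matches once
                tp += 1
                break
    fp = len(predicted) - tp      # detections with no ground-truth partner
    fn = len(unmatched_gt)        # ground-truth synapses that were missed
    return tp, fp, fn

predicted = [(100, 200, 50), (900, 900, 900), (105, 400, 60)]
ground_truth = [(110, 190, 55), (100, 410, 58), (500, 500, 500)]

tp, fp, fn = match_synapses(predicted, ground_truth)
precision = tp / (tp + fp)  # fraction of detections that are real
recall = tp / (tp + fn)     # fraction of real synapses that were found
```

With the toy numbers above, two of the three detections match a ground-truth synapse, giving a precision and recall of 2/3 each.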

How did WEBKNOSSOS support your work? Did you use other tools?

We use WEBKNOSSOS a lot for data visualization, annotation, and result evaluation. It is our primary tool for volume annotations, for example for the generation of ground truth data. We also use WEBKNOSSOS for manually exploring a dataset to get first insights, measuring specific properties such as cell-type distributions, spine and shaft synapse densities of dendrites, and axon target properties.
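As a rough illustration, measurements of the kind mentioned above boil down to simple per-dendrite statistics once the annotations are exported. The record layout and all numbers here are made up for the sketch, not the actual WEBKNOSSOS export format:

```python
# Sketch: computing spine and shaft synapse densities per dendrite from
# hypothetical annotation exports (illustrative data, not real exports).

dendrites = [
    {"id": 1, "path_length_um": 120.0, "spine_synapses": 96, "shaft_synapses": 12},
    {"id": 2, "path_length_um": 80.0, "spine_synapses": 56, "shaft_synapses": 16},
]

def synapse_densities(dendrite):
    """Return spine and shaft synapse densities per micrometer of path length."""
    length = dendrite["path_length_um"]
    return {
        "id": dendrite["id"],
        "spine_per_um": dendrite["spine_synapses"] / length,
        "shaft_per_um": dendrite["shaft_synapses"] / length,
    }

densities = [synapse_densities(d) for d in dendrites]
```

Comparing such density distributions across species is one way first insights emerge before the full connectome analysis.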

I had previously worked with Neuroglancer for visualizing and tracing axons. However, its interface was different, and I ended up manually clicking on synapses and pasting the data into an Excel sheet. Luckily, the integration of WEBKNOSSOS with Neuroglancer now enables direct data streaming, which allows me to keep working with the tool I know and prefer.

What are the next steps to continue your research?

We have demonstrated that a 2- to 3-fold increase in interneurons comes with a 10-fold increase in interneuron-to-interneuron connectivity. This calls for more experiments to dig into the consequences of this change and which interneuron subtypes contribute to it.
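A back-of-the-envelope sketch (my simplifying assumption for illustration, not the paper's analysis) shows why a 10-fold connectivity increase is on a plausible scale: if interneuron numbers roughly triple and the per-pair connection probability stayed fixed, the number of possible interneuron-to-interneuron connections would grow roughly quadratically.

```python
# Back-of-the-envelope: how a ~3x increase in interneuron count scales
# the number of possible interneuron-to-interneuron connections, under
# the illustrative assumption of an unchanged per-pair connection
# probability. Counts are hypothetical.

def possible_pairs(n):
    """Number of ordered pairs, i.e. potential directed connections."""
    return n * (n - 1)

n_mouse = 100           # hypothetical interneuron count in a volume
n_human = 3 * n_mouse   # ~3x more interneurons

ratio = possible_pairs(n_human) / possible_pairs(n_mouse)
# ratio is roughly 9, i.e. on the order of the observed ~10-fold increase
```

Whether the observed increase comes purely from cell numbers or also from a shifted wiring preference is exactly what the subtype-level follow-up work aims to disentangle.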

We have now started working on identifying interneuron cell types across species. Once again, we use volume EM data to examine the different types of interneurons by quantifying their rates of synaptic connectivity onto somata, dendrites, and so on. We have already gotten started with Voxelytics and, together with scalable minds, we will speed up the pipeline to specifically answer this question.
