Interview with Carles Bosch on Correlative Multimodal Imaging

Albane le Tournoulx de la Villegeorges
Published in WEBKNOSSOS
Jul 11, 2022

Correlative multimodal imaging is a powerful technique for studying how the mammalian brain works. It makes it possible to generate maps that carry more detail where it is needed and to jump between scales and levels of detail, which is useful both for analyzing existing datasets and for exploring new territory. However, combining the streams of data provided by different microscopy techniques is also a technical challenge.

Carles Bosch is the principal laboratory research scientist working with Andreas Schaefer at the Sensory Circuits and Neurotechnology Laboratory of the Francis Crick Institute. In this interview, he will share with us how he and his team are handling these difficulties in their research on glomerular column circuits in the mouse olfactory bulb.

Carles Bosch (principal laboratory research scientist at the Francis Crick Institute)

Can you roughly describe the project you are working on?

We want to understand how the neuronal circuits for olfaction work, and to do so we study both their structure and their function by combining imaging techniques.

When we look at the world through different lenses, we get complementary insights. Similarly, if you image the same thing with different microscopy techniques, you obtain different pictures that complement each other.

In the first paper we just published, we put together seven different “cameras”, imaging the same brain. These seven imaging modalities combined provide a more complete picture of the structure and function of the same brain circuit.

In the second paper we explore how to generate correlative maps more reliably and robustly. To obtain good-quality maps, we refined the staining protocols and measured the accuracy with which the distinct pictures could be mapped onto each other, making sure the different layers would overlap correctly in the final map.

What would be the next step in your research?

From now on, we will use those maps to study how the brain recognizes odors. We present about 50 different odors to a mouse and record how neurons within the same local circuitry get activated in response to them.

The mouse olfactory bulb contains the first relay where sensory neurons pass their activity on to other neurons, so we study the very early stages of that sensory pathway.

Can you tell us more about these different imaging modalities?

Sure.

  • We start by imaging a region of mouse brain tissue in vivo with 2-photon microscopy and recording its physiological function. At this point, we acquire not only a functional map but also the first anatomical one.
  • This region then gets dissected and imaged again with bright-field light microscopy and 2-photon microscopy, providing another anatomical map. This step helps determine whether the region imaged in vivo actually lies within the dissected sample.
  • After staining the tissue sample, dehydrating it, and embedding it in resin, we use lab and synchrotron X-ray tomography (LXRT and SXRT). The latter offers a 10-fold resolution improvement over the former: with SXRT, the nuclei, blood vessels, and apical dendrites become visible. In the SXRT-acquired datasets, we can trace the apical dendrites of neurons for up to 200 µm, which makes it possible to trace the input-output excitatory backbone of the glomerular columns in the olfactory bulb. And since the technique is non-destructive, we can continue working with the same sample.
  • The last step is to image the tissue sample with serial block-face electron microscopy. This technique provides a very valuable insight: the contacts between the neurons — the synapses.

Which tools did you use to visualize and annotate your data? How did they help you tackle the challenges that came up during your study?

We rely on a varied set of tools. These live in the Fiji, MATLAB, and Python software ecosystems, as well as Amira and Imaris when it comes to rendering some 3D scenes for illustrative purposes. A crucial tool for us was webKnossos. It has proved essential because it addresses several critical needs:

  • First, we work with volume datasets that can be data-heavy. Handling them locally is possible if you have a powerful computer, but even then it can be slow and cumbersome. webKnossos is optimized for displaying and annotating these large datasets.
  • Second, our work is intrinsically collaborative. With webKnossos, you no longer need to send your colleagues the datasets or work on the same computer. You can simply work on the same dataset simultaneously, from different computers.
  • And third, robustness: we might analyze one dataset for several different purposes, and some selected datasets remain useful for years. Keeping them in webKnossos is a good way to have the data safely stored and organized.

Can you tell us more about how you correlated the datasets and how the tools you used supported this process?

Overlaying the maps I was talking about earlier means moving, rotating, and scaling data from different modalities in 3D space. The datasets are warped onto one another based on conserved landmarks (e.g. the contour of a glomerulus, unique blood vessel branching points, etc.), which we manually annotate using Fiji’s BigWarp tool.
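
To make this concrete, here is a minimal sketch of landmark-based registration in Python, with invented coordinates standing in for real landmark annotations. BigWarp itself fits smooth thin-plate-spline warps; this sketch shows only the simpler affine case to illustrate the principle.

```python
import numpy as np

# Paired landmarks (e.g. glomerulus contours, blood-vessel branching points),
# annotated once in each modality. All coordinates are invented for illustration.
src = np.array([[10.0, 12.0, 5.0],    # e.g. 2-photon space
                [40.0,  8.0, 7.0],
                [22.0, 30.0, 9.0],
                [35.0, 25.0, 3.0]])
dst = np.array([[105.0, 118.0, 52.0],  # e.g. X-ray tomography space
                [402.0,  79.0, 71.0],
                [218.0, 295.0, 90.0],
                [348.0, 251.0, 33.0]])

# Fit a 3D affine transform (rotation, scaling, shearing, translation)
# by least squares: dst ≈ src_h @ A, with src_h in homogeneous coordinates.
src_h = np.hstack([src, np.ones((len(src), 1))])   # shape (N, 4)
A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # shape (4, 3)

def warp(points):
    """Map points from the source modality into the target modality."""
    points_h = np.hstack([points, np.ones((len(points), 1))])
    return points_h @ A

# Residuals at the landmarks quantify the registration accuracy.
print(np.linalg.norm(warp(src) - dst, axis=1))
```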

Then, the analysis involves annotating the datasets, meaning tracing features of interest that are resolved by the different imaging modalities. Ideally, you want to draw your annotations in the original state of a dataset, before any warping is applied, and then be able to relate the annotations made on different datasets to each other. So the goal is to draw in each dataset and warp the annotated points onto the target dataset.

Same brain tissue sample imaged with lasers, X-rays, and electrons to address both its subcellular structure and its physiological function.

Once you have established the complete chain of transformations from one dataset to the next, you can easily draw in the in-vivo light microscopy dataset and transform the annotation to match the ex-vivo electron microscopy dataset, merging insights about the structure and function of the neuronal circuit.
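
As a sketch of what that chaining looks like: affine registrations expressed as 4x4 homogeneous matrices compose by multiplication, so an annotated point can be pushed through every step at once. All names and matrices below are hypothetical placeholders, not values from the study.

```python
import numpy as np

def affine(matrix, translation):
    """Build a 4x4 homogeneous transform from a 3x3 matrix and a translation."""
    T = np.eye(4)
    T[:3, :3] = matrix
    T[:3, 3] = translation
    return T

# Hypothetical per-step registrations, each fitted from landmarks as above.
invivo_to_lm = affine(np.diag([1.1, 1.1, 0.9]), [5.0, -3.0, 2.0])
lm_to_sxrt   = affine(np.eye(3) * 2.0,          [100.0, 80.0, 40.0])
sxrt_to_em   = affine(np.eye(3) * 10.0,         [0.0, 0.0, 12.0])

# Compose once; note the order: the first step applied is the rightmost factor.
invivo_to_em = sxrt_to_em @ lm_to_sxrt @ invivo_to_lm

point_invivo = np.array([12.0, 34.0, 8.0, 1.0])  # homogeneous coordinates
print("annotation in EM space:", (invivo_to_em @ point_invivo)[:3])
```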

We integrated the warping engine of BigWarp with the webKnossos MATLAB library by Motta et al. into a single MATLAB environment, which we named ‘warpAnnotations’. This environment can read annotations, warp them into any of the target spaces listed above, and save them in the standard webKnossos format so they can be imported back into your webKnossos scene. This tool made annotating and warping an intuitive process and allowed us to quickly visualize and iterate on the correlated images.
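
warpAnnotations itself is MATLAB-based, but the same round trip can be sketched in Python: parse a webKnossos skeleton annotation (an NML file, which is XML), warp each node position, and write the file back for re-import. This is only an illustration, assuming the usual NML layout with per-node x/y/z attributes; warp_nml is a hypothetical helper, and invivo_to_em is the composed transform from the sketch above.

```python
import numpy as np
import xml.etree.ElementTree as ET

def warp_nml(in_path, out_path, transform):
    """Warp every skeleton node of a webKnossos NML file with a 4x4 affine."""
    tree = ET.parse(in_path)
    for node in tree.getroot().iter("node"):
        p = np.array([float(node.get("x")), float(node.get("y")),
                      float(node.get("z")), 1.0])
        q = transform @ p
        # Round to the nearest voxel, assuming integer voxel coordinates.
        node.set("x", str(int(round(float(q[0])))))
        node.set("y", str(int(round(float(q[1])))))
        node.set("z", str(int(round(float(q[2])))))
    tree.write(out_path)

# e.g. push an in-vivo tracing into the EM dataset's voxel space:
# warp_nml("invivo_tracing.nml", "em_tracing.nml", invivo_to_em)
```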
