From image to information — 10 questions for QMENTA*

Jasmin Wachter
Published in QMENTA Tech Blog
7 min read · Nov 15, 2018

--

QMENTA is a start-up whose platform helps customers in neuroscience and neuroradiology analyze imaging datasets. In this interview, Christoph Barmet, the CEO of Skope, wanted to know about the challenges of aggregating and analyzing data from the perspective of a young company building its business in the MRI ecosystem. David Moreno, who leads the Neuroimaging Team at QMENTA, answered Christoph’s questions.

*This post was originally published on Skope’s blog

Christoph Barmet: What is QMENTA providing?

David Moreno: In a nutshell, QMENTA offers an AI-powered cloud-based platform to streamline the imaging workflow for clinical trials and clinical research in order to accelerate the discovery and development of new treatments for neurological diseases. The QMENTA platform was designed as an integrated environment for users to easily collaborate on projects in neurological diseases. We help investigators and doctors to aggregate data from different sources, whether it be 1 site in 1 country or 20 sites in 10 countries, and to analyze imaging records with our scalable cloud infrastructure. Our processing algorithms enable investigators to quantify changes in and damage to the brain, to follow a longitudinal study, and to analyze trends in cohorts, thus supporting decision-making on disease diagnosis and prognosis in clinical research or during a clinical trial.

CB: What are the challenges in your image analysis work?

DM: One of the main challenges we face is adapting the many different standards and tools out there to offer an easy-to-use platform, so that our users only need to worry about uploading the data, starting their desired analysis and studying the results. On our platform, images are automatically classified, which allows for an easy and seamless execution of biomarker analyses. In our automated pipelines, the outputs of one tool serve as inputs for the next. However, it is challenging to fit all the inputs and outputs of the different tools, whether open-access tools, proprietary QMENTA tools, or third-party tools, to an internal standard so that they can be chained into complete workflows.
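
As a rough illustration of what fitting heterogeneous tools to an internal standard can look like, here is a minimal, hypothetical sketch in Python; the names and the shape of the contract are invented for this post and are not QMENTA’s actual interface.

```python
# Illustrative only: a shared I/O contract so that one tool's standardized
# output can feed the next step of a pipeline.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class ToolResult:
    files: Dict[str, str]      # logical name -> file path, e.g. {"t1_brain": "/data/t1_brain.nii.gz"}
    metrics: Dict[str, float]  # scalar outputs, e.g. {"brain_volume_ml": 1234.0}

# Each adapter maps a tool's native inputs/outputs onto the shared contract.
ToolAdapter = Callable[[ToolResult], ToolResult]

def run_pipeline(initial: ToolResult, steps: List[ToolAdapter]) -> ToolResult:
    """Chain adapters so each tool's standardized output feeds the next tool."""
    result = initial
    for step in steps:
        result = step(result)
    return result
```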

CB: Image datasets are typically not standardized. What issues does this cause?

DM: A standardized project with well-organized, homogeneous data is always desired, but it is not always possible, particularly in retrospective projects where data has been acquired at different centers with different scanners and imaging protocols. Traditionally, this would mean a project quality nightmare, given that not all subjects will have the same number of files, the same modalities and certainly not the same filenames. This makes automation and scripting difficult and typically requires a lot of manual curation work. The quality control on the platform allows identifying outliers and non-compliant subjects with very little effort, so that they can be reacquired or excluded from a project. Internally, our algorithms are prepared to work with a reasonable range of image resolutions and quality levels, although of course, the better the quality of the input data, the better the quality and reliability of the results. So it is always important to work with carefully acquired, high-quality data whenever possible.
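
As an illustration of the kind of automated compliance check described above, here is a toy sketch; the metadata fields, expected modalities and voxel-size limit are all made-up assumptions, not the platform’s actual quality-control rules.

```python
# Toy quality-control pass: flag subjects with missing modalities or
# voxels coarser than an assumed protocol limit.
expected_modalities = {"T1", "FLAIR", "DWI"}
max_voxel_size_mm = 2.0  # assumed protocol limit

def find_non_compliant(subjects):
    """Return (subject ID, missing modalities, too-coarse flag) for outliers."""
    flagged = []
    for subj in subjects:
        missing = expected_modalities - set(subj["modalities"])
        too_coarse = max(subj["voxel_size_mm"]) > max_voxel_size_mm
        if missing or too_coarse:
            flagged.append((subj["id"], missing, too_coarse))
    return flagged

# Example usage with toy metadata:
subjects = [
    {"id": "sub-01", "modalities": ["T1", "FLAIR", "DWI"], "voxel_size_mm": (1.0, 1.0, 1.0)},
    {"id": "sub-02", "modalities": ["T1"], "voxel_size_mm": (1.0, 1.0, 3.0)},
]
print(find_non_compliant(subjects))  # flags sub-02
```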

CB: MR images have a contrast that is not quantitative, i.e., the grey scale value of an individual voxel has no direct physical correlate. Would an improved homogeneity of the datasets make analysis work easier and more powerful?

DM: Yes, the homogeneity of MR images is actually a hot topic. As you have mentioned, the intensity values you get out of standard MRI sequences are not absolute, so you cannot compare intensity values directly. However, there are some advanced sequences that give interpretable voxel values with a direct physical correlate. We mostly use MR images to derive information that is not based on absolute values, such as volumetry, which relies on relative contrast changes. Additionally, by applying preprocessing techniques such as denoising and intensity correction, we compensate for this lack of homogeneity. But it has also been shown that differences between MR scanners introduce variation. Hence, when using data from multiple scanners, the differences in volume you need to detect must be bigger in order to see a significant effect. If homogeneity is improved, that kind of variation and noise will indeed be reduced, all measurements will become more accurate, and we will be able to detect more minute changes.
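
One common form of intensity correction is bias-field correction; the sketch below uses SimpleITK’s N4 filter as an example of this kind of preprocessing step (not necessarily the exact method used on the QMENTA platform, and the file paths are placeholders).

```python
# Minimal sketch of bias-field (intensity) correction with SimpleITK's N4 filter.
import SimpleITK as sitk

image = sitk.ReadImage("t1.nii.gz", sitk.sitkFloat32)
mask = sitk.OtsuThreshold(image, 0, 1, 200)         # rough head/brain mask

corrector = sitk.N4BiasFieldCorrectionImageFilter()
corrected = corrector.Execute(image, mask)          # remove slow intensity gradients

sitk.WriteImage(corrected, "t1_n4.nii.gz")
```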

CB: You have mentioned two aspects, volume and contrast. Is it correct that volume measurements are more important to you than the interpretation of contrast?

DM: Yes, this is generally the case for many of the applications we are involved in. There are two types of brain properties which are commonly measured using MRI. On the one hand, we measure macroscopic geometric properties such as volume. Extracting volumetric quantification depends entirely on good tissue-to-tissue contrast to delineate boundaries. Hence, sequences which offer the best contrast are used, but the absolute voxel values are irrelevant and arbitrary. Other types of images, referred to as ‘quantitative images’, actually do measure absolute values of physical parameters relating to microscopic tissue properties, such as iron concentration or cell density. In these cases the voxel value has a physical meaning; examples include diffusion-weighted imaging, T1 mapping or R2* mapping. Both types of quantification are important for different applications, but volumetry is one of the most commonly used tools in many CNS trials.
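
As a small example of the volumetric side, a volume can be read off a segmentation mask by counting labeled voxels and multiplying by the voxel volume from the image header; the file name and label value below are placeholders.

```python
# Derive a structure volume from a label mask: voxel count x voxel volume.
import nibabel as nib
import numpy as np

seg = nib.load("hippocampus_mask.nii.gz")
data = seg.get_fdata()
voxel_volume_mm3 = np.prod(seg.header.get_zooms()[:3])

structure_voxels = np.count_nonzero(data == 1)      # label 1 = structure of interest
volume_ml = structure_voxels * voxel_volume_mm3 / 1000.0
print(f"Structure volume: {volume_ml:.1f} ml")
```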

CB: Another aspect of imaging is accuracy. MR images can be inaccurate for many reasons. Which are the artifacts you struggle with the most?

DM: The most common artifacts are a cut-off field of view, wrap-around, intensity gradients from B0 field inhomogeneity, and Gibbs ringing or ghosting. In general, artifacts should always be avoided. For a reliable result, it is crucial to set up a good sequence protocol and to have the right procedure for taking the image, e.g., to avoid motion. Yet I would argue that not all of them pose a real problem: minor artifacts can usually be dealt with through our preprocessing techniques.

CB: You say that a minor image artifact is acceptable. Do you have favorites among the different artifacts?

DM: When measuring the geometry of the brain, artifacts that the algorithm mistakes for an actual brain structure are very difficult to correct: a Gibbs ringing line could be taken for an interface between the brain and the cranium that is not actually there, or wrap-around aliasing could fold the nose into the occipital lobe. Other types of artifacts can be dealt with more easily; inhomogeneity of the magnetic field, magnetic susceptibility distortions, or the ground level of noise in the image are examples of this.

CB: How much do you benefit from high resolution? Are there applications where you miss structures because the resolution was not high enough?

DM: It depends on what you would like to achieve, in terms of the trade-off between accuracy and the time invested in the analysis. For example, for anatomical images a resolution of 1 mm isotropic is usually good on a 3T scanner. If you go higher, the measures will be more accurate. This is definitely possible on research scanners, but the acquisition time and the computational cost will increase. Conversely, when the voxel size grows beyond 2–3 mm, the measurements become less reliable and you need a larger reference value for comparison. For other images, e.g., diffusion imaging on a 3T scanner, a resolution between 1 mm and 2 mm is good for typical whole-brain connectivity measurements.

However, it is true that a higher resolution helps you detect and quantify structures such as small grey-matter nuclei. As computing power keeps increasing, we will also be able to handle higher-resolution imaging data in less time. In the end, it truly depends on the application and objective. If you want to do a whole-brain analysis, going too high in resolution might not give you an advantage worth the extra time you have to invest. If you want to focus on a specific region or study the connection between two different regions, higher resolution can be a clear advantage.
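
A quick back-of-the-envelope calculation shows why resolution drives computation cost: halving the isotropic voxel size over an assumed field of view multiplies the number of voxels by eight.

```python
# Voxel count for an assumed 256 x 256 x 192 mm field of view.
fov_mm = (256, 256, 192)

def voxel_count(voxel_size_mm):
    return int(round((fov_mm[0] / voxel_size_mm) *
                     (fov_mm[1] / voxel_size_mm) *
                     (fov_mm[2] / voxel_size_mm)))

print(voxel_count(2.0))   # ~1.6 million voxels
print(voxel_count(1.0))   # ~12.6 million voxels, 8x more data to process
```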

CB: How would you define the ideal MR image?

DM: There is no single ideal image to acquire, as the optimal imaging parameters depend on the type of scanner available, the type of subjects, the purpose of the study, the disease, and the biomarker to be analyzed. However, there are two qualities desired in all imaging acquisitions: the absence of imaging artifacts and a high isotropic resolution. While clinical images were traditionally meant only for 2D visual inspection by a clinician or radiologist and typically have high resolution in the viewing plane, biomarker algorithms work best with 3D images. For example, 1 mm is ideal for anatomical images taken on a 3T scanner and 2 mm for diffusion images, although we can handle lower-resolution data as well.

CB: Is it a typical case for you to analyze data from one subject where you look at data that was acquired over several sessions over a period of a few months or even years?

DM: It is a frequent case for us. We deal with the longitudinal change of MS lesions or of brain volume, e.g., to see whether atrophy is occurring. In these cases, the more consistent the image datasets are, the better and more reliable the results can be. Optimally, this means the same machine acquiring the images with the same parameters. We can work with images from different machines with different parameters, as we can adjust for that, but the reliability will always be higher for homogeneous datasets.
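
As a toy illustration of such a longitudinal measure, the percent change in brain volume between two sessions is a simple ratio; the numbers below are invented.

```python
# Percent brain-volume change between two time points (made-up values).
baseline_ml = 1180.0    # brain volume at first session
followup_ml = 1162.5    # brain volume one year later

percent_change = 100.0 * (followup_ml - baseline_ml) / baseline_ml
print(f"Percent brain volume change: {percent_change:.2f}%")  # about -1.48%
```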
