The Artificial Intelligence for Imaging (AI4Imaging) course and workshop, organised by the D-Lab group at Maastricht University and held from 11 to 14 December 2019, was insightful and hands-on. The course covered recent advances in radiomics and artificial intelligence and how they can be applied to medical images. This article provides a brief overview of the key oral presentations, social events, and the hackathon.
Day 1 — Radiomics
On the first day, the workshop was officially opened by Professor Philippe Lambin, who welcomed the participants and introduced the course. My takeaway from Philippe's introductory talk is that radiomics features can be either hand-crafted or deep learning-based, and that radiomics is an emerging field that translates medical images into quantitative data, providing biological information for phenotypic profiling in diagnosis, theragnosis (deciding how to treat), and follow-up.
Next was a talk by Wim Vos, who highlighted the opportunities for applying radiomics in pharma research; I found the concept of applying AI-based image analysis in drug development intriguing. My favourite talk of the workshop was “The Image Biomarker Standardisation Initiative”, delivered by Alex Zwanenburg-Bezemer, who explained why standards matter in radiomics: they standardise the processing schemes used to compute image biomarkers from imaging and set reporting guidelines for studies performing radiomics analyses.
The final talk of day 1, by Mathieu Hatt, focused on the pitfalls in radiomics and raised an important question: do we even need segmentation? Mathieu compared two models, one built using radiomics features extracted from the segmented volume of interest (VOI) and one where the VOI was not segmented, and found that both achieved comparable prognostic power.
After lunch, the hands-on practical workshop kicked off, divided into two strands: one for scientists and one for beginners (mostly clinicians). I attended the scientist session, where we were given two tasks. The first task was to build a classifier to distinguish contrast-enhanced from non-contrast-enhanced lung CT scans using radiomics features extracted from the gross tumour volume (GTV). The second task was to build a radiomics classifier that distinguishes between benign and malignant lesions, which could improve the clinical handling of suspicious lesions and minimise the need for invasive procedures.
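The workshop materials aren't reproduced here, but the first task can be sketched in miniature: extract a few first-order radiomics features from the voxel intensities inside a VOI, then classify on them. Everything below (the feature set, the 100 HU threshold, the toy intensities) is an illustrative assumption of mine, not the workshop's actual pipeline.

```python
import math
from statistics import mean, stdev

def first_order_features(voi):
    """Compute a few first-order radiomics-style features from a flat
    list of voxel intensities (a stand-in for a segmented GTV/VOI)."""
    n = len(voi)
    lo, hi = min(voi), max(voi)
    width = (hi - lo) / 16 or 1  # 16 intensity bins; avoid zero width
    counts = [0] * 16
    for v in voi:
        counts[min(int((v - lo) / width), 15)] += 1
    entropy = -sum((c / n) * math.log2(c / n) for c in counts if c)
    return {"mean": mean(voi), "stdev": stdev(voi),
            "energy": sum(v * v for v in voi), "entropy": entropy}

def is_contrast_enhanced(features, threshold=100.0):
    """Toy rule: iodinated contrast raises attenuation, so a high mean
    intensity inside the VOI suggests a contrast-enhanced scan.
    The threshold of 100 is an arbitrary illustrative choice."""
    return features["mean"] > threshold
```

With toy intensities, `is_contrast_enhanced(first_order_features([180, 200, 190, 210, 195]))` comes out True, while a low-attenuation VOI such as `[40, 50, 45, 55, 60]` does not; a real solution would of course learn the decision boundary from labelled scans rather than hard-code it.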
After a busy day filled with talks and hands-on coding sessions, we set off for the breathtaking Maastricht Christmas Market at Vrijthof square for a bockwurst and glühwein.
Day 2 — Deep learning
Thursday started with a recap of day 1 by Henry Woodruff. Thereafter, Bram van Ginneken delivered the first talk of the day, highlighting the success stories of deep learning for imaging. This was followed by talks on deep learning and radiomics by Joe Deasy, ensembles and challenges by Bjoern Menze, and deep learning in a clinical workflow by Andrew Maidment.
The takeaway messages from day 2:
1) Deep learning is critical to enable auto-segmentation and automated response tracking.
2) Deep learning will not replace clinicians but will work hand-in-hand with them to aid clinical decisions. However, as an optimist who wholeheartedly believes in machines, I say “time will tell”.
In the afternoon, we got back to the practical sessions. The task for the day was to train a model for classification of breast tumours.
After another busy day of hard work, we were treated to a fancy 3-course dinner as part of the social programme, held at the Thiessen Wijnkoopers.
Day 3 — Data
The third day of the workshop focused on data, touching on important aspects such as patient selection and synthetic data. The talk by Andrew Maidment on synthetic data was my favourite of the day. It highlighted the importance of simulated data and how it can address the following problems:
- Lack of large curated datasets.
- Unknown ground truth.
- Contaminated data.
For transfer learning, pre-training a model on synthetic medical data is better than pre-training on images of cats and dogs, because the pre-training task is more closely aligned with the final task.
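The idea can be sketched with a deliberately tiny model: pre-train on plentiful synthetic data, then continue training (fine-tune) on a handful of "real" labelled cases. The logistic-regression model, the data distributions, and every number below are illustrative assumptions of mine, not anything from the talk.

```python
import math
import random

def train(data, w=None, b=0.0, lr=0.1, epochs=200):
    """Logistic regression trained with plain stochastic gradient
    descent. Passing existing weights in continues training, which is
    all that "fine-tuning" means for this toy model."""
    if w is None:
        w = [0.0] * len(data[0][0])
    for _ in range(epochs):
        for x, y in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1 / (1 + math.exp(-z))   # sigmoid
            err = p - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 / (1 + math.exp(-z)) > 0.5

# Stage 1: pre-train on plentiful synthetic cases (hypothetical
# one-feature data: class 0 centred at -2, class 1 at +2).
random.seed(0)
synthetic = [([random.gauss(centre, 1.0)], label)
             for label, centre in [(0, -2.0), (1, 2.0)]
             for _ in range(50)]
w, b = train(synthetic)

# Stage 2: fine-tune on a handful of "real" labelled cases. Because
# the synthetic distribution resembles the target one, the pre-trained
# weights are already a useful starting point.
real = [([-1.5], 0), ([1.8], 1), ([-2.2], 0), ([2.5], 1)]
w, b = train(real, w=w, b=b, epochs=50)
```

The same reasoning is why pre-training on simulated phantoms is claimed to beat pre-training on natural images: the closer the source task sits to the target task, the more of the learned representation transfers.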
On Friday afternoon, the hackathon started. The aim of the hackathon was to develop a radiomics-based model to classify lung tumours using CT images.
Day 4 — Multicentric distributed learning
The final day focused on distributed learning. Distributed learning enables two or more institutions to train machine learning models jointly without patient data ever leaving any institution: the model is sent to each site, trained locally on that site's records, and only the resulting model updates are returned and aggregated, so patients' data remain private. Samir Barakat from ONCO Radiomics presented a live demo showing how a machine learning model can be trained using datasets from two different hospitals without the data leaving the hospitals.
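As a rough illustration of the principle (not the demo's actual system), here is a minimal federated-averaging-style loop in which each "hospital" trains a shared linear model on its own data and only the weights cross the institutional boundary. The hospitals, data, and hyperparameters are all made up for the sketch.

```python
import random

def local_update(weights, data, lr=0.05, epochs=20):
    """One hospital trains the shared linear model on its own records;
    only the updated weights leave the site, never the patient data."""
    w, b = weights
    for _ in range(epochs):
        for x, y in data:
            err = (w * x + b) - y
            w -= lr * err * x
            b -= lr * err
    return w, b

def federated_round(global_weights, hospitals):
    """The coordinator sends the current model to every hospital, each
    trains locally, and the returned weights are averaged (in the
    spirit of federated averaging)."""
    updates = [local_update(global_weights, data) for data in hospitals]
    n = len(updates)
    return (sum(u[0] for u in updates) / n,
            sum(u[1] for u in updates) / n)

# Made-up example: two hospitals whose local data follow the same
# underlying relationship y = 2x + 1, with a little noise.
random.seed(1)
hospital_a = [(x, 2 * x + 1 + random.gauss(0, 0.1))
              for x in (0.0, 1.0, 2.0, 3.0)]
hospital_b = [(x, 2 * x + 1 + random.gauss(0, 0.1))
              for x in (0.5, 1.5, 2.5, 3.5)]

model = (0.0, 0.0)
for _ in range(30):
    model = federated_round(model, [hospital_a, hospital_b])
```

After a few rounds the averaged model approaches the relationship both hospitals' data share, even though neither site ever saw the other's records; that separation of data and model movement is the core of the approach demonstrated.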
In conclusion, radiomics is a promising emerging field facing a number of challenges, and workshops such as AI4Imaging are important for building the communities that will help overcome them. I would highly recommend the AI4Imaging course and workshop: it provides a unique opportunity to learn, share, and network with leading-edge practitioners in the field of quantitative medical imaging. Hats off to the D-Lab team for putting together a great course and workshop, and most importantly for their hospitality.