Radiology, AI & Active Learning

Koa Labs

Aug 7, 2023


I’d like to follow up my previous post about telehealth, AI & dermatology with some quick thoughts about AI, radiology, active learning and the need for pre-competitive big data sets in radiology.

This article in the NYT about a successful study in Sweden using AI to assist radiologists in reading mammograms is a starting place for discussion. The results of the study are compelling, with a few caveats articulated on X by Dr. Robert O’Connor (copied at the bottom of this post). This study and Dr. O’Connor’s thoughtful takeaways highlight two of my core beliefs (inspired by the late, great Marvin Minsky) about effective AI:

  1. It’s always about the machine & the human working together (on many dimensions)
  2. No algorithm is useful without enough great data

It’s early, but my key takeaway from the study in Sweden is that the most compelling uses of AI in the short/medium term will be AI systems that SUPPORT highly skilled humans — especially in highly technical tasks such as radiology. As discussed in my previous post, AI researchers have been at this for many decades, mostly within the discipline called expert systems.

AI, and specifically AI for image recognition, has advanced so quickly in the past few decades that machines can now effectively augment highly skilled humans in tasks that are routine for the human and mundane for the machine. The key successful design pattern is what many like to call “machine driven, human guided”.

I expect to see many more of these types of systems being developed — especially in healthcare/medicine/life sciences — by companies that are commercializing this core design pattern integrating the human and the machine. One of my favorites is Valar Labs, which is focused on tools to support Oncologists, often in radiology.

A key dynamic in these systems/companies is that machines are serving the humans — in this case, Oncologists. The best of these systems (imho) are built with a strong sense of “machine humility”: the belief that the machine should focus on tasks where it can be as effective or more effective than a human expert at large scale. These systems must also use measures of confidence to proactively determine when the machine is not sufficiently sure, and engage the appropriate human expert for validation/correction. The feedback from the human(s) can then be integrated back into the model in what is traditionally called “active learning”. Once the feedback is recorded, the next time the machine encounters the same conditions it can improve its response with progressively less human involvement.
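As a rough illustration of this confidence-gated, human-in-the-loop pattern: the machine answers on its own only when its confidence clears a threshold, and otherwise escalates the case and records the expert’s correction for later retraining. This is a minimal sketch, not any real system’s implementation — every name, the stand-in model, and the 0.9 threshold are illustrative assumptions.

```python
# Minimal sketch of "machine driven, human guided" triage with active learning.
# All names, the toy model, and the 0.9 threshold are illustrative assumptions.

REVIEW_QUEUE = []   # cases the machine was not confident about
FEEDBACK = {}       # expert corrections, consumed later by a retraining job

def predict(image):
    """Stand-in for a real model: returns (label, confidence in [0, 1]).
    A real system would run an image classifier here; this toy version is
    'confident' for even-length inputs and uncertain otherwise."""
    return ("no_finding", 0.95) if len(image) % 2 == 0 else ("finding", 0.55)

def triage(image, threshold=0.9):
    """Accept the machine's read when confident; otherwise escalate to a human."""
    label, conf = predict(image)
    if conf >= threshold:
        return label, "machine"
    REVIEW_QUEUE.append(image)       # queue for expert review
    return label, "human_review"

def record_feedback(image, expert_label):
    """Store the expert's correction; retraining on FEEDBACK closes the
    active-learning loop so similar cases need less human involvement."""
    FEEDBACK[image] = expert_label
```

The design choice worth noting is that the threshold is an explicit, tunable expression of “machine humility”: raising it sends more cases to the expert, and the expert’s corrections become the training signal for the next model iteration.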

As Dr. O’Connor also points out, this study makes a compelling case for building bigger data sets that can improve overall effectiveness. More (and better) data is almost always the bottleneck for large-scale AI systems. I’m hopeful that a large pre-competitive data set of radiological images can be developed over time to train models — in the interest of a massive improvement in radiology’s efficiency, empowering humans in radiology to help more patients faster and more effectively.

According to Dr. O’Connor in his posts on X:

‘So what did it show? Basically that an AI software tool used upfront in breast cancer screening analysis could help rank & prioritise mammograms for human analysis. In so doing, it increases the detection rate and lowers manpower needs.

Why is this important?

  1. Previous AI screening studies showed potential, but humans were still better and more reliable. This is arguably the first to show superiority for upfront deployment
  2. There is an enormous global shortage of people with the skills to produce (radiographers) & interpret (radiologists) mammograms, so anything which improves their efficiency is important
  3. The study detected more early (stage 1) cancers (83% vs 78%). That’s really important as stage 1 #BreastCancer is almost always curable. More stage 1 = more cure with less treatment = significant societal benefit.

Are there any issues?

Yes, unfortunately. The system also detected a greater rate of in situ BC (25% vs 19%). In situ is a complex area as some of these will never go on to cause a life-threatening cancer. Hence potential for over-treatment, although this may not be the case.

Also, the study duration was short, so (this may seem strange) we don’t know if it will have saved more lives — the real test of a screening process. We won’t know for perhaps a decade or so.

Will this immediately revolutionise screening?

Well, no. More research will be needed to validate the finding and explore the impact on treatment & mortality. Ultimately any system will need to save more lives &/or improve efficiency to be of real use

Plus #AI tools are only as good as the “training sets” — the info used by the model to make it work. Breast cancer can present slightly differently in different ethnicities & at different ages, so validation in more countries will be vital

For some women (younger / high genetic risk), digital mammography (x-ray), upon which the system works, may not be the best technology, and MRI (which the system knows nothing about) may be superior

Also, the data privacy aspects of undertaking this kind of work are exceedingly complex: big private firms need large amounts of publicly collected screening data to train & examine their systems. In many EU countries, including Ireland, there are real challenges with this.

What’s the take home? Really important & positive study as it validates the principle that #AI might improve efficiency & make better use of scarce expertise but a lot more work will need to be done before we can be confident that it will save more lives

Importantly, studies like this will help invigorate & get investment in an expensive & complicated research area that has arguably been “underwhelming”. Leading to more success.’



Koa Labs

Located in the heart of Harvard Square, Koa Labs is a Seed Fund for promising start-ups.