Dissecting Digital Health — with Luke Oakden-Rayner
Could you teach an AI to know the difference between a cat and a dog? Luke Oakden-Rayner can teach you how. He taught himself advanced computer science and math so that he could apply his medicine and radiology training to a PhD on deep learning applied to medical images. He’s not even too depressed that the technology he is developing will contribute to radiologists losing 90% of their work!
This is the full transcript of the podcast Dissecting Digital Health with Dr Louise Schaper, interview with Luke Oakden-Rayner, PhD candidate & Radiologist.
Guest: Luke Oakden-Rayner, radiologist, PhD candidate and blogger
Host: Dr Louise Schaper, HISA
Tweet Louise @louise_schaper Tweet Luke @DrLukeOR
Production: This podcast is produced by Ivan Juric
Show Notes
[01:23] Opening remarks by host Dr Louise Schaper
[01:35] Luke Oakden-Rayner introduces himself as a radiologist, doctor, and PhD student working with deep learning. Luke talks about his start in radiology and how a research project led him to deep learning. Luke discusses his path to radiology and the barriers he faced with getting into informatics.
[05:05] Luke shares how he found himself first writing code for research publications during his time at the School of Computer Science at the University of Adelaide. He talks about his self-taught knowledge and the open nature of the computer science community playing a big role in his ability to learn.
[10:18] Louise returns the discussion back to Luke’s PhD and asks him to share his progress. The two end up talking about the deep passion needed to pursue a PhD, and Luke talks about why he found himself doing a PhD, despite it never being part of his career plans.
[11:19] Louise asks Luke to share more about what led him to his research topic and he goes on to explain his work in deep learning — particularly in medical imaging. He talks about the lack of success seen in deep learning within medicine. Louise asks Luke to simplify the differences between AI, machine learning, and deep learning.
[15:50] Luke is excited about deep learning. He discusses what deep learning advances will eventually mean for society. The two discuss the complexity of transferring human judging capabilities to non-human intelligence.
[20:20] Louise brings the conversation about computer advances to its relevance in Luke’s research. Luke delves deeper into the purpose of deep learning in medical image analysis for making diagnoses. Luke shares an interesting example of one his projects which deals with predicting mortality.
[23:42] Luke tells Louise about the diversity of the team he is part of, in terms of knowledge backgrounds.
[24:30] Louise asks Luke to share where the information they work with comes from. Luke talks about the openness of the computer science community and the speed at which discoveries are made in comparison to those made in medicine. He shares the advantages that come from the quick, cost-effective nature of the research and literature produced within computer science that medicine and other disciplines lack.
[28:30] Louise takes the discussion to big picture issues and whether AI and its subsets will put radiologists out of work. Luke talks about the time-frame it will take for computers to replace radiologists and gives examples of other instances where computers have already replaced human labour. He also talks about the other challenges, obstacles, and potential consequences presented by advances in deep learning.
[34:42] Luke touches on the technological developments that are impacting on radiology.
[37:02] Louise closes the conversation on the exciting prospects in radiology, and thanks Luke for sharing his knowledge and insights.
Full Transcript
Opening Remarks by host Dr Louise Schaper
[01:23] Louise: Welcome to Dissecting Digital Health. I’m your host, Louise Schaper, and today, I’m actually broadcasting live from the HISA office in North Melbourne, and I’m sitting with…guest, would you like to introduce yourself?
Luke Oakden-Rayner introduces himself as a radiologist, doctor, and PhD student working with deep learning. Luke talks about his start in radiology and how a research project led him to deep learning. Luke discusses his path to radiology and the barriers he faced with getting into informatics.
[01:34] Luke: Yeah, I’m Luke Oakden-Rayner. I’m a radiologist, a doctor, and I’m currently doing a PhD working with deep learning, looking at medical images in particular.
[01:48] Louise: When did this journey start for you, Luke?
[01:50] Luke: Well, I mean, obviously, I’ve been doing medicine for a while.
[01:54] Louise: That takes a while. Did you always want to be a doctor?
[01:56] Luke: I didn’t, actually. I kind of was one of those people that felt like they wanted to keep their options open, and my friends were going through the UMAT test and I decided to go along with them, essentially, and I was the only one that got in. So, that was fun. But, no, so I did radiology, and yeah, that took a long time, but just as I was getting to the end of it, we had to do a bit of a research project, and I’d always been interested in technology, and I decided that I wanted to sort of push it a bit further. Most of the research projects that get done at that stage maybe aren’t that high-end, but I wanted to try to do something a bit more exciting, and I mean, it started at a less-exciting place than it is now, but it kind of snowballed, and I ended up doing deep learning, and yeah, it’s just come from there. So, that was sort of three or four years ago now that I started, and here I am now.
[02:52] Louise: And what was the less-exciting part where it started?
[02:54] Luke: So, to start with, I didn’t have much background in computer science in general, or informatics at all. I had never written a line of code, for example; I wasn’t even thinking about that. I knew deep learning was a thing at that stage, it had already been around for a couple of years and was making all of these big headlines in technology, and even then, it looked clear it was going to be really important in medicine, but I just never thought I’d be able to do it myself. It just seemed there was this big barrier to getting into it, particularly with my skill set at the time, and so I thought I’d take the more conservative statistical path.
There was some research around at the time about something called radiomics, which is kind of like this big data approach to radiology image interpretation, kind of a bit like genomics, but for radiology. All of these papers had come out that had shown certain ways of getting image features from images, so extracting patterns and things like that, could predict diseases that we hadn’t been able to really interpret as humans before, so things like assessing even the molecular make-up of a cancer, for example, just by doing things that, by deep learning standards are kind of old hat, but at the time, there was a lot of interest and some big publications coming out, and I thought I might take things like that and try and work out if we could assess human health in a more general way. These days, they’re starting to talk a lot more about assessing health in this setting of frailty, about people being weaker than normal, stronger than normal, more robust than normal, and using that as a marker of how they’re going to do with a variety of treatments and things like that.
Luke shares how he found himself first writing code for research publications during his time at the School of Computer Science at the University of Adelaide. He talks about his self-taught knowledge and the open nature of the computer science community playing a big role in his ability to learn.
[04:49] Louise: So, it really started with the math side of things for you?
[05:03] Luke: That was my interest, yeah. It was definitely where I wanted to go, but like I said, I didn’t think I had the skills, and so I decided to try and just take prognostic scores, things like people’s bone mineral density for example, things that we already knew how to do, coronary artery calcium, and things like that, and just try to plug them together into kind of a model for overall health. I initially went to the guy who is now my primary PhD supervisor, Professor Lyle Palmer — he had just come back from Ontario and was now in Adelaide, and he really liked the idea, said we should run with it. From there, it started building from that point, and eventually I went to the School of Computer Science at the University of Adelaide, saying, “Maybe I don’t have enough skills for this yet, but we’ve got some really interesting ideas,” and eventually got connected with another of my supervisors, Gustavo Carneiro, who is a deep learning expert in medical image analysis, and at some point in that process, decided that maybe I could upskill myself, and over a period of a year or two, it got to the point where I was writing the code myself for research publications. It was really fun going from not thinking I could do it at all, and sort of staying away from the whole deep learning thing, to just doing it myself.
[06:30] Louise: And how did you learn that, then? What was the process? YouTube videos?
[06:35] Luke: Yeah, pretty much.
[06:36] Louise: Oh, really? [Light laughter]
[06:37] Luke: Yeah [Light laughter]. I mean,
one of the great things about computer science is it’s a very open community compared to, say, medicine, where it’s still very locked into that ‘bricks and mortar’ education, and it’s a bit more closed…you don’t freely give out your preliminary results and things like that. By comparison, computer science is just this open, sharing community.
They’ve got these really amazing online courses, this idea of MOOCs, massive open online courses. I wrote a blog post about this recently: I’ve done dozens, maybe a bit over 100, of these MOOCs to varying levels of completion.
[07:16] Louise: Yeah, I saw your blog this morning, what was it, something like — you’re really proud of all the MOOCs you’ve dropped out of?
[07:20] Luke: Yeah, exactly. Yeah, so I mean, one of the great things about MOOCs is you can take as much as you want from them. You’ve put in no money, you’ve put in no…you don’t have a GPA to get out of it or anything like that, and so I’ve started probably 100 MOOCs. I’ve completed maybe a dozen, and most of it’s somewhere in between that, so I’ve usually watched at least a couple of videos, done a couple of assignments in each MOOC, but then decided I’ve seen enough of this one, maybe I’ll skip to week eight and see what was exciting in week eight or something like that. So, initially, my first MOOC would have been an introduction to programming, like introduction to Python, or something like that. There’s some really good basic starter MOOCs like that. From there, I did some machine learning and biostatistics MOOCs, and then finished up with the classic deep learning MOOC, which everyone would recommend for image analysis, which is a Stanford course called CS231n, and worked through that, and
really by the end of that course you’re pretty capable of doing applied deep learning.
From there it was…at that point, you’re really ready to start reading research papers. So, from there, went on to do that, and now I’m doing research.
[08:42] Louise: Yeah, and we’ll talk about your journey and what you’re working on as well, but since you’re talking about the MOOCs, I guess that’s really great career advice for people as well. If you want to learn something, there’s actually a wealth of opportunity to do so.
[08:55] Luke: Oh, I mean, it’s fantastic.
It’s free, freely available, you can do it in your own time.
The catalyst of me wanting to upskill like that was I had just finished my barrier exams for radiology, and I mean, these exams are kind of massively traumatising, you study for 18 months solid to do these exams kind of thing, and I came to the end of it, and people go two ways: they keep being really intense about the study, or they just stop studying entirely at that point because they’re over it. I didn’t really want to lose my skills. I was more disciplined than I’ve ever been, I was better at studying than I’d ever been, and it was kind of just like I could see this fading away within a couple of weeks, and I’d be back to baseline, and I’d always wanted to do computer science in some way. I’d never done it before, decided that that was the right time to do it, and yeah, I just started doing these MOOCs at that stage, and it really was just a really smooth flow-on to just keep studying, essentially.
[09:54] Louise: Well, actually, with these podcasts, we’ll put some of the transcript online, and I’ll get some links from you, from your blog as well.
[10:02] Luke: Yeah, for sure.
[10:03] Louise: So people who are interested in checking out some of the things you studied and that you recommend.
[10:07] Luke: Yeah, for sure, yeah, yeah.
Louise returns the discussion back to Luke’s PhD and asks him to share his progress. The two end up talking about the deep passion needed to pursue a PhD, and Luke talks about why he found himself doing a PhD, despite it never being part of his career plans.
[10:08] Louise: Alright, cool. Now back onto…okay, so you taught yourself, and now whereabouts are you in your PhD studies?
[10:17] Luke: I mean, it’s a bit hard to say, in a sense.
[10:20] Louise: Oh, sorry, no stress. Don’t break out in a sweat, it’s fine. [Laughter] I remember those days of people going, “When are you going to finish that PhD?” It’s never a nice question.
[10:30] Luke: Yeah, it’s a bit confusing for me, because I’m enrolled part-time. I’ve got family responsibilities at home, so I’m technically working part-time, and that means I’ve got eight years to do this PhD, and I’m only a year into it, but I’m looking like I’m more on track to finish in four years. So, I’m probably about a quarter of the way through, something like that, but yeah, it’s a bit up in the air at the moment. It depends what else happens in my life, whether I keep going at this pace or I slow down at some point.
[11:00] Louise: One of the things I find when you’re talking to people who are considering doing a PhD, whether it’s in theoretical physics or art or something, anything really, you have to choose something that you’re passionate about. Would you agree? Because life gets in the way, and it’s messy, and it’s difficult.
[11:17] Luke: Yeah, absolutely. For me, I never even considered doing a PhD, essentially. In radiology, almost no one does them. We don’t have that culture of doing higher degrees, and it certainly doesn’t improve my job prospects in radiology or anything like that. Our job is clinical, for the vast majority. But, for me, I’d started this work — essentially, when I started, I intended to have it finished during my training, and so I was going to have a one-year project, I was going to have it done, and that would’ve been a cool thing to have done during my training, but like I said, it kind of snowballed from there, and I just decided I was going to keep doing this research. At one point, Lyle, who is my primary supervisor now, just said, “Well, you’re going to be doing this work anyway. Do you want to get a PhD out of it?” and so it was kind of like the idea and the sort of drive to do it came before I even considered it was going to be a PhD, and in some sense, I would be doing this work anyway, I may as well get a degree out of it, essentially. It’s just a bit of extra paperwork I have to do occasionally.
Louise asks Luke to share more about what led him to his research topic and he goes on to explain his work in deep learning — particularly in medical imaging. He talks about the lack of success seen in deep learning within medicine. Louise asks Luke to simplify the differences between AI, machine learning, and deep learning.
[12:26] Louise: So tell us about your topic and how you landed on that topic, as well.
[12:32] Luke: So, I’m doing…essentially, I’m working on deep learning in medical imaging, how to make it fit.
There’s been a lot of success with deep learning across most of the technology world, but so far we haven’t had many major breakthroughs in medicine, at least — there have been a couple of papers that have come out in the last four months or so, but before that, we hadn’t had any major breakthroughs — and, by major breakthroughs I mean
deep learning is reaching the point where it’s as good or better than humans at a variety of tasks, particularly perceptual tasks, and we hadn’t seen that in medicine yet.
We didn’t have computer systems equalling or beating doctors. And so, my work has essentially morphed into why is that and how can we overcome some of those challenges? The specific tasks that I’m doing — I mean, obviously, that’s a very broad topic, and it doesn’t really have a specific focus in terms of medicine.
[13:34] Louise: Well, maybe — sorry to interrupt, but I’m just thinking, I should have asked you a question before that. So, for those of us in podcast land who might not understand the difference between artificial intelligence, machine learning, and deep learning, is there a simple way of describing that for the uninitiated?
[13:53] Luke: Yeah. I mean, I guess it’s a pretty common question, so there’s been a lot of blog posts and stuff written about it. I think
the simplest way to understand it is that artificial intelligence is the largest bubble of the Venn diagram. It’s trying to get computers to solve problems in a way that roughly looks intelligent, however you define that. Machine learning is a sub-type of that. The other major group of artificial intelligence is what we’d call logic-based systems, or rule-based artificial intelligence, where you follow a flow chart of decisions and you get to an outcome that’s kind of intelligent. Machine learning is the other way to do it: you give a system a data set to train on, and it learns how to do something itself.
So, that’s kind of the sort of half of artificial intelligence, is machine learning, and it’s certainly the most successful part at the moment by a large margin, although a lot of systems incorporate bits of both.
Deep learning is a subset of machine learning, so I guess machine learning…almost all statistical models, and modelling we do, falls under the umbrella of machine learning, so even just fitting a linear model to a data set, for example, that is a machine learning task, but deep learning is this sort of extension of that that has largely come to light in the last six or so years. People have been working on what we call neural networks for a long time, which is the underlying technology, but there was a big breakthrough in 2011/2012 when it really became the most successful way to do machine learning, particularly for perceptual tasks. So, yeah, I mean, it’s really just a subset of a subset of artificial intelligence.
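Luke’s point that even fitting a linear model counts as machine learning can be sketched in a few lines. This is an illustrative toy example only (hypothetical data, numpy alone): the “learning” is just estimating a slope and intercept from a data set rather than hand-coding a rule.

```python
import numpy as np

# Toy data: a noisy linear relationship y = 3x + 5 (assumed values,
# purely for illustration).
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3.0 * x + 5.0 + rng.normal(0, 1, size=50)

# "Training" here is a least-squares fit: the system learns the
# slope and intercept from the data set itself.
slope, intercept = np.polyfit(x, y, deg=1)
print(slope, intercept)  # close to the true 3.0 and 5.0
```

Deep learning sits inside this same umbrella; it just replaces the two learned parameters with millions, arranged in a neural network.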
[15:45] Louise: Okay, but that’s how the hierarchy sort of goes down.
[15:46] Luke: Yeah.
Luke is excited about deep learning. He discusses what deep learning advances will eventually mean for society. The two discuss the complexity of transferring human judging capabilities to non-human intelligence.
[15:47] Louise: And, what’s with the excitement around deep learning?
[15:50] Luke: Yeah, so, I mean, there’s lots of really good reasons to be excited about deep learning. So, the major problem with trying to get computer systems to do human things hasn’t really been the nuts and bolts of making decisions, making decisions quickly, things like that, because a lot of those can be codified in these kind of rules, the decisions that you can just kind of make a flow chart out of, so it can kind of just sort of fall to the right answer, and you know, computers are obviously very good at things like calculation, the very well-defined answer type questions, but for the first time
deep learning has meant that computers can do perceptual tasks, and so things like looking at the world, listening to the world, understanding the world in a much more human sense,
and it’s much more fuzzy as well, that the answers aren’t perfectly defined, but they’re kind of defined well enough in a statistical sense that you get to the right answer within a margin of error and it works really, really well.
[16:54] Louise: Well, when I first met you, you were giving a lecture that I was just glued to. My mind was not wandering off at all, and one of the examples that you gave was how do you teach a computer to know the difference between a dog and a cat, and I’ve often used that example when I’ve thought about it myself and explained it to other people as best as I can, which is nowhere near as well as you can explain it [Light laughter], because you think, “Ah, of course! Oh…I’ve got a tail…” like, it’s really quite challenging to think of how you would do that, and yet somehow an infant, a human baby, actually understands the difference between a dog and a cat.
[17:32] Luke: Yeah, absolutely. I mean, I’m actually going to use that example again in the lecture I’m giving today.
[17:36] Louise: Oh, great! I like it [Light laughter].
[17:38] Luke: Yeah. No, it’s absolutely true. It’s one of these strange things that when I say deep learning does these perceptual tasks really well, people don’t actually really understand how amazing that is. It’s just something that we take so much for granted, that you can listen to the world and you can understand what you’re hearing, or you can look at the world and you see people and things like that. The best way I can describe it is that example. It’s a really clear way to understand it, that
when a computer sees the world, it sees pixels. It sees a matrix of numbers that is the photograph that it sees, for example, and each one of those numbers is just the intensity of a single pixel,
and so you could have, in a five-megapixel photo, you have five million pixels that it’s trying to work out what combination of these pixels means there is a dog or a cat present, and any one of those pixels has almost no relation to that answer, right? So, you have to tell it to look for some sort of pattern in the image, but even when you’re trying to put a pattern into words, let alone doing the math behind how to tell a computer to look for it, when you say to a human adult, “How do you tell the difference between a cat and a dog? Can you give me a few features, a set of features, that perfectly differentiates between the two groups, regardless of age, size, breed, colour, all of those things? Can you put in words the difference that perfectly divides cats and dogs?” I mean, I kind of think it’s an impossible task. There’s no real way a human can put that into words. I mean, the other example I give in the talk is cars versus vans, and it’s like, they share almost all the same features, and you can have big cars and you can have small vans, and you have cars with high roofs and all those sort of things. But, maybe not an infant for cars and vans, but like a five-year-old at least would never get these tasks wrong. So,
that’s where deep learning came from, the idea…the entire concept came from trying to replicate things we knew about how the human visual system worked.
And, yeah, it’s ended up being incredibly successful at doing that, so now computer systems can tell the difference between dogs and cats, and in fact, if you go into sort of sub-breeds and things like that, that humans often don’t have the background knowledge to do particularly well, these systems outperform humans. So,
they get significantly less wrong than humans, with maybe some caveats, so they’re doing better at it. It’s been incredibly successful in doing that.
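The “matrix of numbers” view Luke describes can be made concrete with a toy example. This is a sketch, not anything from his research: a tiny 4x4 grayscale “image” where each entry is one pixel’s intensity, and no single pixel on its own says “cat” or “dog”; only patterns across many pixels carry that information.

```python
import numpy as np

# A hypothetical 4x4 grayscale image: each number is one pixel's
# intensity (0 = black, 255 = white). A real 5-megapixel photo is
# the same idea with five million entries (times three for colour).
image = np.array([
    [ 12,  18, 200, 210],
    [ 15,  22, 198, 205],
    [ 14,  20,  25,  30],
    [ 10,  16,  21,  28],
], dtype=np.uint8)

print(image.shape)       # (4, 4): 16 pixels in total
print(int(image[0, 2]))  # one pixel's intensity: 200
```

The classifier’s job is to map this grid of raw intensities to a label, which is exactly the pattern-finding task that is so hard to write down as explicit rules.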
Louise brings the conversation about computer advances to its relevance in Luke’s research. Luke delves deeper into the purpose of deep learning in medical image analysis for making diagnoses. Luke shares an interesting example of one his projects which deals with predicting mortality.
[20:18] Louise: Excellent. And are these the types of issues you’re looking at with your research? Back to your research now [Laughter].
[20:24] Luke: Yeah, sure [Light laughter].
[20:25] Louise: You were saying it’s quite broad, so yeah, let’s go back to the topics we were talking about.
[20:30] Luke: Yeah, I guess the idea with medical image analysis is that it’s a predominantly perceptual task, so we look at, say, an x-ray, and we have to decide whether there’s a disease present or not. So, that’s the exact same kind of thing as deciding whether there’s a dog present or a cat present, or nothing present, and so, I guess now the idea is that we can potentially automate a large number of these tasks. There’s a huge number of ways you can go about that, and so the classic way would be this idea of computer-aided diagnosis, or I guess computer-automated diagnosis in this sense, where
you just teach a system to recognise all of the different diagnoses that could appear in an image, and it can then replace radiologists, or a pathologist, for example.
We’re not there yet by a pretty large margin.
What I do — what the initial part of my work is focused on is doing a slightly different, potentially easier, task, which is trying to…rather than focusing on diagnosis, trying to focus on outcome, and the reason I do that is that outcome data is much better defined. So, when you look at, say an x-ray, and you ask what’s present, you ask three different radiologists, you will get some variation on the answer, and having this kind of noisy label, we call it…so, the answer has an element of variation in it, depending on who you ask, means that the system doesn’t quite know what to learn.
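The noisy-label problem Luke describes can be sketched with made-up numbers. Here three hypothetical radiologists read the same five x-rays; for some images they disagree, so the “ground truth” a model trains on depends on who you ask, whereas an outcome label such as five-year mortality is a single well-defined answer.

```python
from collections import Counter

# Hypothetical reads of the same five x-rays by three radiologists.
reader_labels = [
    ["normal", "normal", "disease", "disease", "normal"],   # radiologist A
    ["normal", "disease", "disease", "disease", "normal"],  # radiologist B
    ["normal", "normal", "normal",  "disease", "normal"],   # radiologist C
]

# One common workaround: take the majority vote per image.
majority = [Counter(col).most_common(1)[0][0] for col in zip(*reader_labels)]
# Count how many images had at least one dissenting read.
disagreements = sum(len(set(col)) > 1 for col in zip(*reader_labels))

print(majority)       # ['normal', 'normal', 'disease', 'disease', 'normal']
print(disagreements)  # 2 of the 5 images had inter-reader disagreement
```

Majority voting reduces, but does not remove, the label noise; outcome data sidesteps it entirely, which is the advantage Luke is pointing to.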
[22:03] Louise: So what’s an example of the outcome, then?
[22:06] Luke: So, the major part of the project we’ve been doing so far is actually predicting mortality. The initial part of that was predicting mortality in healthy, as well as we could define it, over-50-year-olds, so the idea then was that whatever the system can learn about these images would relate to the presence of undiagnosed chronic disease, for example, and so this is very much like the idea of frailty, that it’s assessing what we would call the health phenotype of the patient, just everything that’s going on in that patient’s genetics and their life. So, disease that maybe has never presented because they don’t have symptoms, but we can detect, because we’re actually looking inside their body. And, we got some pretty good results doing that. We initially started with a pretty small data set, just as a proof of concept. We’ve scaled up since then, which has its own challenges as well. So, now we’re working in the thousands-of-cases space, working with CT chests in particular, and again, getting better results. So,
deep learning systems, the more data the better, essentially, the results are improving as we’d expected them to.
Luke tells Louise about the diversity of the team he is part of, in terms of knowledge backgrounds.
[23:12] Louise: Oh, fantastic, and you keep saying “we,” too, so you’re on this journey not just with your supervisors, are there other PhD students looking at other aspects?
[23:22] Luke: So, we don’t have…I mean, the group I work with is made up of my PhD supervisors, so I’ve got kind of public health, medicine, and computer science all in that group.
[23:36] Louise: Did they work together before they took on you?
[23:38] Luke: No, no, so they all came together around this project, essentially. But, on the computer science side, I work with a group that’s in Adelaide, Queensland, and Portugal, and we all work together on my project, in particular, but they all have their own students, as well, and so there’s a whole bunch of related work that’s always going on, and there’s often overlap between what we’re doing. Like I said, sort of at the start, a lot of what I’m doing is seeing…finding where the challenges are in doing these things and trying to overcome them, so there’s a lot of cross-talk between what we’re doing. So, it ends up being probably 10 or 15 people we’re usually in contact with, but a core group of sort of three or four, and I’m the only student in that. The rest are associate professors or professors.
Louise asks Luke to share where the information they work with comes from. Luke talks about the openness of the computer science community and the speed at which discoveries are made in comparison to those made in medicine. He shares the advantages that come from the quick, cost-effective nature of the research and literature produced within computer science that medicine and other disciplines lack.
[24:29] Louise: Alright, cool. So, in a field that changes quickly, and even if it stays relatively still for a while, there can be a major breakthrough, does that mean that you have to pay even more attention to the literature and where do you get that information from, because something could be released that would really impact the work that you’re doing?
[24:53] Luke: Yeah, absolutely. I think, like I said before, computer science is really an open kind of research community. It’s really remarkable coming from medicine into computer science to see it, because in medicine, most of the really impactful projects are probably a decade in the making, right? You do your initial phases of trials or something, and then you get to a Phase III trial, and that takes five years, a lot of money and a lot of effort, and it’s usually a big consortium across the world. In computer science, the big breakthroughs come from single teams doing pieces of work for three months, and they put them up freely online before they’ve been published, on a pre-print server like arXiv. Usually by the time it gets published in a conference or proceedings, it’s obsolete by two or three cycles.
It’s so amazing how quickly they iterate in research in computer science, particularly in deep learning at the moment.
So, yeah, every week there’s usually a couple of papers that have at least some impact on what I’m doing, so you very much have to keep on top of it. It’s not so easy, but there’s some ways you can do it.
[26:08] Louise: I had no idea, actually. There’s a big global push for science of all descriptions to be more open, and not locked behind paywalls. So, I didn’t realise that computer science was leading the journey so much in that regard, for open access or open data.
[26:28] Luke: Yeah, it definitely is. They’ve got some advantages that make it easier for them. Like I said, Phase III trials are incredibly expensive, whereas
a breakthrough in computer science can be one or two people working for a month or two, and so you’ve got this huge cost differential there.
If you put your Phase III trials up on a pre-print server and someone else is right on the verge of publishing it and they publish right before you, that makes a pretty big difference to how impactful your results are. If you’ve only put two months of work into something and it’s only been two people doing it, there’s much less risk to being pipped at the post. I mean, I can see why the medical research community is the way it is.
[27:13] Louise: It’s also the way funding is structured around grants, it’s not encouraging of collaboration.
[27:18] Luke: I mean, the other thing is medicine is so much bigger, so if we had a really open sharing sort of environment, and people were just publishing stuff as it came, even publishing parts of research just as it came up, we’d be so completely inundated. I think…I can’t remember exactly what it was, I looked this up for a blog post I did a little while ago, but I think
medicine, as a research output, is as big as every other part of science put together, and probably significantly bigger, I think, actually. I can’t remember the exact stats. Similarly, the amount of money that goes into it is a very similar kind of proportion, and so you’re getting millions of publications per year in medicine,
whereas until sort of six or seven years ago, a lot of these conferences in machine learning or deep learning had dozens of people at them. It’s certainly changed now, they’re getting sort of tens of thousands at these conferences, but that was the culture, that’s where it came from. It was a very small community, and they shared because they all knew each other. You can’t know everyone in medicine. So, I can see why there’s cultural barriers there.
Louise takes the discussion to big picture issues and whether AI and its subsets will put radiologists out of work. Luke talks about the time-frame it will take for computers to replace radiologists and gives examples of other instances where computers have already replaced human labour. He also talks about the other challenges, obstacles, and potential consequences presented by advances in deep learning.
[28:28] Louise: That’s interesting. Alright, well, let’s talk about some of the big picture issues. So, you mentioned before, so I’ll touch on that then, will artificial intelligence and the subsets of that put radiologists like you out of work? What do you see for the future?
[28:43] Luke: Yeah, again, this is something that I’m going to be talking about tonight, so hopefully I’ve got a pre-prepared answer that works quite well.
[28:50] Louise: [Laughter] Yeah, so Luke is lecturing at the University of Melbourne tonight with me. Well, anything you can rehearse here on the podcast, the students won’t hear it until after.
[29:02] Luke: So, I mean, the way I try and bring it up in the lecture is that there’s this professor called Geoffrey Hinton who is one of the biggest names in computer science. He’s one of the biggest people in deep learning in general. He’s actually called one of the godfathers of deep learning — there’s three of them that are kind of at the top of the field — and he said last year…I’ll try and remember the exact quote.
We should stop training radiologists now, because it’s completely obvious that within five years, computers will be better than them
— something along those lines. When he talks, people should listen. He’s a very, very switched on guy, and has had a lot of experience not only with the theoretical side, but with applying things.
[29:47] Louise: Okay, but it’s not all a dystopian future is it?
[29:49] Luke: No, I mean, that’s a really big call, and to be honest, talking to people, other computer scientists, in person, there’s a lot of scepticism around that kind of claim. My personal opinion is I guess that it’s probably going to happen, but I’m not quite as optimistic about the timeframe, or pessimistic, considering it’s my job. But, yeah, so
it’s certainly true that deep learning has been around as a breakthrough technology for about six years, and in that time, we’ve seen visual recognition, speech recognition, self-driving cars, complex game learning, all of these things be overtaken by computers. Before that time period, all of these things were decades away from being solved.
[30:38] Louise: Yeah, well even the self-driving car people, even when they started the projects they didn’t think they would make the advances as quickly as they have.
[30:46] Luke: No, absolutely. I mean, it’s been…every month or two, there’s something that shocks deep learning researchers. It’s just something that you didn’t expect to happen for at least a couple of years, and this is already knowing deep learning exists. Coming from before that time, there was this huge change that happened.
It’s absolutely true that certain medical tasks — say, radiology, pathology, parts of dermatology — are primarily visual tasks, and these systems excel at visual tasks.
That’s what they’ve essentially developed from, and they’ve sort of snuck out into speech recognition and things like that, but visual tasks is essentially where they started and what they’re really, really good at. The question is, are there additional barriers that make it harder to do medicine than it is to recognise cats and dogs, for example? I think there are. I think they’re solvable barriers to some extent, but I think it’s probably going to push us a bit further out. The other issue is regulation, so that’s always going to add time onto translation from research to clinical practice. I don’t know…I think if you wanted to sort of pin me down on it, I’d say in five years we’re not going to be obsolete, and we’re probably going to barely feel it at all; radiologists won’t notice their jobs disappearing in the next five years. But if you push it out to 10 years, 20 years, and certainly by 30 years, I really think that for most of these visual tasks we will be seeing pretty significant changes in how the workforce exists.
[32:18] Louise: So, we’re certainly looking at a generational change then. Does that mean, though…it doesn’t mean the jobs go away, though. Does it mean they change?
[32:27] Luke: Yeah, I mean, I think there’s always this argument…actually, I’m kind of criticising this argument in the talk. There’s this argument that we’ll work out other things we can do. So, for example, in radiology, radiologists often do procedural work, which is hands-on, sort of mini-surgery for people who haven’t seen radiology procedures, and certainly if we have visual systems, it’s not going to take over procedural work. So, people say
technology has always displaced human labour, but always, historically, we’ve worked out new things to do, and we haven’t had mass unemployment, for example.
I’m not sure I necessarily agree with that base argument to start with, because we have had mass unemployment in the farming sector, for example, where the vast majority of people used to farm and then, in a very short space of time, a huge number couldn’t farm, and there was huge social unrest because of that.
We’re having a very rapid change in these visual, perceptual fields, and the speed may overcome our ability to develop new tasks.
The other issue is for things like radiology, the vast majority of our work is visual, and so even if we had all of these procedural tasks left to do, it may be that we lose 90% of our work. If you lose 90% of your work, you don’t need 90% of your radiologists, and I mean, that’s a really big deal. So, I don’t really fall down on the side of “it’ll all be alright.” Something is going to happen, it really probably depends how quickly it happens. There are other things that can be done, as long as you have enough time to sort of re-train your workforce in a sense, but at some point, we’re going to reach a position where there isn’t radiology work left as we currently think about it.
Luke touches on the technological developments that are impacting on radiology.
[34:20] Louise: What are some technologies that you’ve seen, either in real life or just online, that really excite you about the future, in terms of radiology? I mean, 3D scans, instead of looking at a 2D image, these types of technologies?
[34:42] Luke: Yeah, I mean, obviously, I’m going to say image analysis. That’s the thing that excites me most at the moment. I think that’s because it doesn’t rely on the underlying technology. Whatever underlying image quality you have, if there’s something to learn in that image, these systems will potentially be able to learn it.
[35:04] Louise: Yeah. I mean, certainly, you think about the number of people who don’t have high-speed internet in Australia, in rural communities especially. Actually, even a couple of my staff members, if it’s a cloudy day in the Blue Mountains, can’t actually video into the office…well, that’s what they tell me, anyway. So, the idea is that, like you said, even if we’ve got lower quality images, the image analysis can still work, and the AI behind that can make a huge difference to people’s diagnoses, and therefore prognosis, I guess.
[35:37] Luke: Yeah, absolutely. I mean, I guess on that point, a lot of these systems, at the moment, you have to downscale your images quite a lot compared to what we’d call “clinical standard,” and we’re still getting results that are very good. The quality that we rely on in clinical practice may not be required in general — we’re not quite sure yet. There have been some papers that say we do need to maintain high resolution, but it’s entirely possible we could get by with significantly less. I guess the real…the kind of world-changing part of this is the efficiency and cost. The efficiency can go up so much and the cost can go down so much. I mean, currently, a radiologist can view 100 images a day, maybe a bit more, maybe a bit less, depending on what they’re doing.
Once you have a trained deep learning system, each one of those images can be viewed in under a second, for probably under a cent in electricity. We often look at the western world, or the developed world, but across the world we’ve got this huge problem with access, particularly to technological medicine, imaging and things like that, and if you can supply imaging for such a low cost, then it can rapidly roll out.
So, I mean, that’s kind of the blue sky, hopeful picture that I have in my mind.
Louise closes the conversation on the exciting prospects in radiology, and thanks Luke for sharing his knowledge and insights.
[37:02] Louise: I love it. Okay, well maybe we should end up there on a very positive note, and knowing that it’s not a dystopian future quite yet for radiologists. Well, thanks for spending time with me today, Luke. I’m looking forward to your lecture tonight.
[37:15] Luke: Great, thanks for having me.
[37:16] Louise: Thanks for contributing to the podcast.
[37:17] Luke: Cool, thanks very much.
[37:18] Louise: No worries. See you.
Check out
Luke’s blog — https://lukeoakdenrayner.wordpress.com/
Contact Us
Suggest a guest via dissectingdigitalhealth‘AT’gmail.com
Want to learn more about digital health and health informatics — join HISA: Australia’s Digital Health Community www.hisa.org.au