Mastering AI for interventional radiology: current status and future outlook
Judy Gichoya, MD, MS, FSIIM
Hear about the transformative role of artificial intelligence (AI) in healthcare with leading physicians and experts. This episode explores AI’s applications in diagnostics, patient monitoring, and clinical decision support, while addressing challenges such as data integration, ethical considerations, and the future of AI-driven medicine.
Episode Transcript
Recorded live from the Cook booth at SIR and featuring leading experts in the field of interventional radiology discussing a wide range of IR-related topics, this is the Cook@SIR Podcast Series.
Hello and welcome to the Cook@SIR podcast. We are live at the Cook booth. I’m Dr. Judy Gichoya from Emory University, and I’m thrilled to lead you on a journey as we explore the incredible potential of artificial intelligence in medical practice. Today we’ll discuss real-world applications of AI, practical tips for integrating this technology into your practice, and the essential training tools that you’ll need to stay ahead in this evolving field. So I can promise you that this is going to be an exciting ride, and I’m not alone. I’m here with Dr. Dania Daye from Harvard, and I don’t want to butcher her introduction, so I will let her introduce herself.
All right. Thank you again for having me, Judy. And my name is Dania Daye. I am an interventional radiologist at MGH, an associate professor at Harvard. I also run a small AI lab at the Martinos Center looking at translation of AI into clinical practice.
So, Dania, I have to tell you, I’m very, very surprised at this SIR. We’ve always had this panel; we’ve actually been on it together. And yesterday there was an AI-generated Dotter Lecture. One word for that: How did you find it?
Oh, that was very impressive.
One word, Dania. Okay. So I asked a few people; they said creepy. They said interesting. But the thing I connected with was someone who said this type of ability would have taken people months, or would have been limited to experts like you to get done. Maybe you can help our listeners who weren’t at SIR understand what was there, what that AI-generated Dotter was. Maybe that’s a good place to start.
No, absolutely. I mean, I think it goes without saying that this field is advancing so, so quickly that it’s putting these tools in the hands of anyone in the community, not necessarily those of us who do AI research. And generative AI, as we all know, has really expanded drastically in the last two years. I mean, just think back to about two years ago, when we all first heard about GPT, and now almost every single person you have met is using it, and the applications are expanding. Now we can overlay it on images of people who have died, like we saw yesterday, and have them talk. It’s definitely something to be used with caution, I’m going to say, but it drove the point home yesterday for sure.
Okay, so fast-forward to today. Again, this is a very, very special SIR, because it’s the 50th anniversary, so when you hit the big milestones, everyone wants to celebrate. I believe there’s even a Lego box that I missed getting. But today there were all these luminaries—and I have to tell you that I’m pretty surprised—these luminaries were saying, “Look, we have to think about innovation differently.”
We are at Cook, right? Most of those early stories even start with, “Oh, I would call Cook and they would make this catheter.” So what do you think, again, is the impact, or what is different about AI in interventional radiology?
I mean, I think we all know that IR is built on innovation. I actually was doing some research just last week for a talk I’m giving, and I did not realize that in a 40-year period—or I believe a 35-year period—practicing IRs, members of SIR, had 2,400 patents.
Wow.
Twenty-four hundred patents, and this is actually a published paper.
Okay.
So that tells you how much innovation we have in this field. And because of that, I really do think AI is the next frontier in medicine—I think we all agree on that—and we as a specialty have embraced innovation certainly much quicker than some of the other specialties. We really have to be at the forefront of this. I think many people are excited about AI. There is, I would say, cautious excitement. People still don’t really understand where it fits. Most of the applications we’re seeing today are in diagnostic radiology, but we’re definitely starting to see more and more applications slowly making it into IR, and these are very exciting times.
I just want to dive right into what we are here to do. So what are the current applications of AI in your practice, and how are they being used?
Yeah, I mean, if we look right now at IR, I always like to think about five different buckets for the applications of AI in IR, really following the patient journey. The first is patient selection for our procedures and patient triage, where I believe most of the products on the market that are being used in IR divisions today fall, and I’ll talk a little bit more about them in a second. The second category or bucket I like to think about is preprocedural planning. There are also a couple of products here that all of us use, especially those of us doing Y90: a lot of the segmentation algorithms for Y90 planning are based on AI-cleared algorithms. The third bucket is intraprocedural support. There are some algorithms on the market right now in terms of guiding which vessel to embolize, et cetera. I think many of us use some of those products, but I would say we’re still in the infancy. There’s so much potential in that space, but there is definitely an issue with the data sets that are available in IR to allow us to speed the development of those advances as quickly as we would like. The fourth bucket is algorithms to predict patient response and prognosis, to really allow us to select the best patient for the best procedure for the most optimal outcome, and this is an area I do a lot of research in. And the fifth bucket is the applications of large language models and generative AI in augmenting patient-centered care, where we are seeing a very, very rapid expansion, especially over the last two years.
So just to go a little bit deeper into each one of these, to give some examples for our audience to get a better idea of what we’re talking about: on patient selection and triage, today we’re seeing a number of algorithms on the market looking at acute vascular findings—automatically detecting patients with aortic dissection, AAAs that are about to rupture, PEs—and I think many people right now are using some of these PE algorithms for their PERT teams. We’re starting to see some very interesting data about how this is affecting and changing the referral patterns to IR procedural services in the hospital, and how it’s also speeding the interventions. There are also some very early papers starting to show some effect on outcomes that I’m happy to talk about a little bit more.
Just before that—and I know we are going to take this walk from where you get the patient through the whole journey; I love that, and I’ve seen you speak about this several times, so we will get a chance to unpack it—I want to go back to the clinical outcomes. So you’re saying, “Look, here is early intervention; we are seeing maybe a difference in mortality.” Do you think this is only in the high towers of Harvard, or can I get access to this algorithm in my village? Tell us more. What have you seen outside, especially for that initial trial, do you know?
No, absolutely. I mean, I think your question is really spot on. The use of these algorithms right now is still limited to a few places. I mean, we’re just going to say it as it is: These algorithms today are very expensive to implement, and there are lots of challenges to implement them, so their use is still limited to a number of probably higher-powered places, I’m going to call them. But we have to really start looking at some of the data that those places are producing. And I specifically mentioned the PE algorithm because there was a paper that was published last year looking at the effect of the PE algorithm on mortality. And they have shown that they were able to significantly decrease the time it takes to get the patient to intervention in the case of high-risk PE. And they were able to cut their mortality from 8.8% pre-implementation of AI to 2.2% post-implementation of AI. I mean, that’s a sobering number. These are patients’ lives.
That’s half, that’s more than half, right?
That’s more than half, that’s a lot more than half. And I think this is something that we have not seen reproduced in other places. So I would still take with a grain of salt. This has to be reproduced in larger studies, but there is some signal there. And if this pans out and more places are able to reproduce this, this might as well become the standard of care, because we all want better patient outcomes.
So one of the discussions today, actually, on a panel, was the turf wars. At Emory we take care of PE patients as interventional radiologists, and so do the cardiologists, right?
Absolutely.
We have all these people encroaching, wanting to get into this space. Do you think AI becomes an equalizer, and is that dangerous for us?
Yeah, no, and that’s a very good point. Definitely, AI is allowing some equalization, because it is allowing access to a more specialized skill set that currently may be limited in some places. However, I mean, the way I think about terf wars is that at the end of the day, the patient is our true north, and as long as we work collaboratively across specialties to really serve that one patient, I think this is probably the better way to start thinking about how we should take care of the patient, because what these algorithms are going to do is that they’re going to help us identify a lot more patients that are going to need intervention. There’s going to be a need for more and more proceduralists to do all of these procedures. I think there’s going to be enough to go around and we all need to work together to be able to serve everyone.
Okay, so the pie is big, according to Dania. So this is initial: there’s triage, right? There’s still a human, but we are seeing new care patterns. Maybe the interventional radiologist is actually coming to the forefront because of this activation, for example. If you’re part of the party, what’s the next step in the patient’s journey where AI can be applied?
So once the patient has been selected, it has been decided that this patient qualifies for a procedure, and coordination of care has happened, at that point we start the process of preprocedural planning. And with preprocedural planning right now, there are a couple of different algorithms on the market—not necessarily for PE, but I’ll use the example of Y90. For Y90 procedures, we all need to do some form of segmentation to calculate our dose. A lot of the segmentation algorithms that are currently available on the market, in the software that most of us use, are actually cleared by the FDA with an AI component to them.
I don’t think many people know that they’re using AI, but many of them, the semiautomated segmentation is AI-based. And there is some early work that we’re starting to see in terms of device sizing, where there are some papers being published where AI can do some automated segmentation to help. For example, size EVAR devices among other things. So we’re starting to see some signal there, but I think it’s still in very early stages, and there’s still a lot of work to be done in that space.
So I feel like preprocedural planning is this area of—I like to say this—what’s in a name? Every time you walk through the booths, people are saying, “There’s AI, there’s no AI.” Does it matter? If you are someone who’s trying to say, “Okay, I want to understand this,” do you really need to know? What should you be asking when someone says, “Oh, but this has AI”? Should you believe it, and does it even matter?
Yeah, and I think that this is a great segue into really trying to get better training in terms of what to ask companies, how to talk to companies, and how to best evaluate any algorithms you’re considering buying for your practice, which I know we’ll talk about a little bit later in this podcast in terms of implementation. But I think it’s very important to get some training in terms of what questions to ask: When do algorithms fail? How were the algorithms trained? And if you end up implementing something locally at your institution, look at your performance locally. Sometimes companies will tell you an algorithm performs at 98% accuracy, then you come and implement it locally and those numbers do not quite hold up on your patient population or your data set. I mean, should we care that an algorithm has AI? Honestly, I see AI just as a tool to make our life easier. I don’t think it really matters how it’s done, as long as it has good accuracy and it does not fail often.
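To make the point about checking performance locally concrete, here is a minimal sketch of the kind of spot check an institution can run, assuming you can export the model’s flags and pair them with your own ground-truth reads; the labels and numbers here are hypothetical:

```python
# Minimal sketch: checking a vendor's claimed accuracy on your own cases.
# Assumes paired ground-truth labels (from local reads) and model outputs
# for a sample of local studies. All data below is hypothetical.

def confusion_counts(y_true: list[int], y_pred: list[int]) -> tuple[int, int, int, int]:
    """Return (true pos, false pos, true neg, false neg) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp, fp, tn, fn

# Hypothetical local sample: 1 = finding present, 0 = finding absent.
ground_truth = [1, 1, 0, 0, 1, 0, 0, 1, 0, 0]
model_output = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]

tp, fp, tn, fn = confusion_counts(ground_truth, model_output)
print(f"Sensitivity: {tp / (tp + fn):.0%}")                 # 75% on local data
print(f"Specificity: {tn / (tn + fp):.0%}")                 # 83% on local data
print(f"Accuracy:    {(tp + tn) / len(ground_truth):.0%}")  # vs. a claimed 98%
```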
Okay. So we have a new tool. AI is a tool, and we use this tool for patient selection and preprocedural planning. What’s the next one?
So after that, once we’ve done our preprocedural planning, comes time for the procedure. So, starting to think about things for intraprocedural support. For intraprocedural support, today there are not as many algorithms, and this is really stemming from the lack of standardized data across IR. It’s very hard to find labeled angiographic data sets for people to be able to develop algorithms on. Most of what’s available today tend to be more guiding embolization around vessels, some segmentation like algorithms. I think there is a lot of room for innovation in that space, but this is probably one of the spaces that still needs a lot of work.
Okay. And do you really think we can ever get to a standardized data set?
No. I think we need to develop more computational techniques to get around that problem, yes.
Okay. Okay. Okay. So you’ve done the procedure. Is that it for AI, or there are other things that we have we can use it for?
So after we do the procedure, all of us tend to see our patients in clinic. In clinic, we’re really starting to look at algorithms for patient response prediction, prognosis prediction, longer-term prognosis prediction. And as I mentioned, this is an area I do a lot of work on in my lab: trying to segment tumors and predict longer-term response to therapy, predict the likelihood of longer-term mortality, depending on the question at hand, et cetera. This is a very active area of research, but it definitely needs what we call multimodal data sets, where we combine clinical data, pathology data, genomics data, and imaging data. And this is an area that many, many people around the world are working on, but I think it’s still in its infancy, unfortunately.
And the journals have been very, very good at allowing us to do this. I remember I collaborated with IRs from Canada—I believe that was last year—to write about this future we envision. And it was like: oh, you’re tired, you are on call for trauma, and you walk in and the AI system has flagged a study as having a PE and has already done some scoring of the PE, whether it needs intervention. The decision has been made to actually do the procedure. And once that happens, the next step is that it activates your team, it sends the notification to the patient with a tablet, and they can get consent. And it 3D prints the catheter that you need, because it understands the anatomy. And when you’re in the case, it even shows you your imaging, your plans, your procedure. You do the case—
It’s IR for the future.
Yes. Do you think we should be doing that? Walk me through what you really see in the IR of the future, if you had to say.
I’ll start by saying right now, there are a lot of flashy applications that people are promising, and I think it’s very important for us to focus on one thing, and that’s the value. What value is any of this bringing to IR? Because at the end of the day, I mean, research is an area where we all get to do whatever we want to do. Are these ideas always practical and have a robust business plan for actual implementation in clinical practice? And I would argue probably not. I think at the end of the day, the things that make it all the way through are the things that bring value, that have effect on patient outcome, and that lead to improved practice efficiency. And if those solutions do not meet any of those, I don’t think those solutions will ever make it into real clinical applications.
Okay. So every time I try to figure out the state of the field, I go to the ACR’s list of FDA-cleared algorithms and say, “Okay, let’s see what’s here.” IR doesn’t even exist today as a category. And you are saying value—I don’t even need to be a computer scientist to figure out the value. What can listeners do? What does it take for you to really think, “How is this tool going to help me in my practice?”
No, absolutely. And I think we need, as a specialty, to start thinking about metrics. At the end of the day, a lot of these databases tend to have long lists of different algorithms and their performance metrics, et cetera. But what really matters at the end of the day is the outcome measures. What is the outcome measure that’s going to help me improve my practice? Is it something that’s helping me improve patient outcomes, reduce patient mortality, do my procedure in half the time it typically takes? If there is really no effect on those metrics that matter to you and me as practicing IRs, I don’t think the algorithm actually has real value. I mean, both of us, I know, go to all the computer science conferences, and many computer scientists are brilliant in what they do, but at the end of the day, they have a hammer. Everything for them is a nail.
Wow. Okay. So I’ll tell you that the last—I believe it was last week—the most important thing that could have saved my time was a transporter dedicated to IR.
Yeah. Absolutely!
It’s not AI or anything else. If they got my patient on time in the room, it would’ve completely transformed me. But I don’t want to poo-poo or just sort of play down the impact of this technology.
No, no, absolutely not.
And so, I want to maybe—
For the right application, it is very powerful.
Yeah, so you know, one of the prestigious things—I mean, IR is the fourth pillar of oncology care—and there has been a lot of work that we are seeing today on clinical trials. I know there is some work from Emory showing you can actually automatically flag patients and direct them to clinical trials. And for listeners who may not be familiar, clinical trials are expensive.
Absolutely.
They take a long time to recruit, and in most of them, the patients who are eligible are not even asked about it. So when I think about this, you know, you have actually started to ask, “How could AI be used in this?” Maybe walk us through not the end-to-end patient life cycle—not directly what it takes to do a procedure—but these other things that can really bring value to us as interventional radiologists and our patients.
Yeah, I mean, thinking about that patient journey as a continuum, starting from the very beginning with patient selection, I think AI is going to allow us to identify patients who typically were not identified before who would benefit from a procedure. I think that’s the first one that I see. The second one is allowing us to get them to intervention quicker. In the paper that I mentioned about PE, they showed that they were able to reduce time to intervention by more than 50%, because of the care-coordination component on the AI platform they were using. In terms of preprocedural planning, once we get to the point where we’re able to predict what size stents or grafts we need, et cetera, we can potentially reduce our shelf stock for certain things, or preorder things to make sure we have the right devices for the right procedure. And for patient response and prognosis prediction, the value really comes from being able to get the right patient the right intervention for the most optimal outcome. I don’t think we are there yet in medicine in general, but AI does promise to one day get us to the point of being able to do true precision medicine.
Okay. So tell us a little bit about your work on clinical trials and their intersection with artificial intelligence.
Yeah, I think a lot of the work on clinical trials right now with AI is really focusing on helping with patient identification. One of the hardest things is matching the right patient with the right clinical trial. And we both know that one of the main reasons many clinical trials end up closing is insufficient enrollment. So AI can really help with that by going through charts, looking for keywords. Right now we’re seeing some very interesting applications with large language models, because LLMs are able to go through vast amounts of data, pick up certain terms, and create databases that allow us to flag patients who normally would not have been flagged for clinical trials. So I think there are definitely applications that are going to improve very quickly how we practice, once LLMs become a lot more mainstream than they are right now and can access the entire EMR.
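As a rough illustration of that chart-screening workflow, here is a minimal keyword-based sketch; a production system would use an LLM over the full EMR rather than literal string matching, and the trial criteria, note text, and function names below are hypothetical:

```python
# Minimal sketch: flag candidate patients for a trial by scanning note text
# for eligibility-related terms, then route hits to a human coordinator.
# The criteria and note below are invented for illustration.

INCLUSION_TERMS = ["hepatocellular carcinoma", "hcc", "child-pugh a", "bclc b"]
EXCLUSION_TERMS = ["main portal vein thrombosis", "extrahepatic spread"]

def screen_note(note: str) -> dict:
    """Flag a note if any inclusion term appears and no exclusion term does."""
    text = note.lower()
    inclusion_hits = [t for t in INCLUSION_TERMS if t in text]
    exclusion_hits = [t for t in EXCLUSION_TERMS if t in text]
    return {
        "candidate": bool(inclusion_hits) and not exclusion_hits,
        "matched": inclusion_hits,
        "excluded_by": exclusion_hits,
    }

note = ("62M with HCC, Child-Pugh A cirrhosis, BCLC B, "
        "no evidence of extrahepatic spread, referred for embolization.")
print(screen_note(note))
# Note the pitfall: the naive matcher trips on the NEGATED phrase
# "no evidence of extrahepatic spread" and wrongly excludes the patient.
# Handling negation and context like this is exactly where LLMs help.
```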
Do I need to attend my tumor board discussion, or will AI attend that for me?
I just presented a paper about AI helping with tumor board recommendations, but I will say we are not there yet, for sure. We still absolutely need to be attending our tumor boards, but I think there is going to be potential, with time, to provide tumor-board algorithms that are optimized and trained on very specialized tumor-board recommendations, to be used in not-well-resourced settings in other countries where they do not have access to that expertise. Is it ever going to replace tumor boards? I don’t see it in the near future or anytime soon, but there are definitely opportunities in terms of helping not-well-resourced places.
Okay. So we’re going to end this session just giving you a sense of the various flavors of the current applications and usage with one question, which is, What was your first application that you developed, and why did you pick that?
I have been in this area for quite a while, and I’m going to go back to when I was an undergrad student, before AI was even hot. Back then was the very first time I ever heard the term machine learning. That was back in 2006 or 2007. We were trying to develop a machine learning algorithm in our lab to predict response to an optical imaging technique that we were developing. And I got hooked, and I have been in the field ever since.
Awesome.
So I think this is a really nice transition, now that we’ve talked about the different applications, into how we implement this clinically. I know you’ve implemented some of these techniques in your practice, so what practical tips do you have for our audience here?
So, at Emory University we have a radiology AI council. It’s composed of clinical representatives from all the subspecialties of radiology—I am the interventional radiology representative—and we have a checklist. Now, there are many checklists, things like the CHAI initiative—if you search for all this, everyone is trying to come up with a checklist—and they can help you vet what the initial application is going to be. And so, what we’ve done with that is the champion completes the form and brings it to the council, and then the council reviews it—and we review it with similar lenses to what you are saying, right? What value does it bring? And it’s not just value. We also look at the unintended consequences, which I think people are really not appreciating. It’s the disruption to radiology. Some of these applications—now, mind you, most of us in IR don’t get the best monitors, but if you look at diagnostic radiologists, they have like five monitors—and now they want you to have another tablet to review AI results, so it’s impossible. And what we are seeing is that it doesn’t matter how good your algorithm is: if you cannot make it integrate seamlessly into the workflow, then no one is going to use it. Even just clicking one more thing, they’re not going to use it. So that’s been a very early lesson for us, and bringing it in this harmonized way allows us to decide, okay, should we do a silent trial? And a silent trial, for our listeners, is where you’re just running the algorithm without interrupting the workflow or giving the results back to the radiologists or the interventional radiologists. You’re just trying to say, “Okay, let’s see how this works in the real-world setting.” And then, if we don’t understand something, we bring the vendor in to discuss the application with us. I’ll tell you one pitfall with the vendors—they’re not open to it, though it’s still a small community—which is that they don’t tell you who else is using the algorithm, so that you can ask them, “What’s been the value for you?” Because I truly do believe that AI development and implementation is local: you understand your local problem, you understand the local value that’s provided, and you are going to understand how you’re going to deploy this technology. So for us, having the radiology AI council has streamlined this process at Emory University, and we also work in a different healthcare system, Grady, where I’ve been able to participate as well. That streamlined process has made it very easy. We’ve reviewed 13 applications; we’ve actually rejected some, and some that are already in deployment, we are saying we’re going to stop.
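For listeners curious what the plumbing of a silent trial can look like, here is a minimal sketch; the run_model function, study identifiers, and log format are hypothetical stand-ins, and the essential point is that outputs are logged for later review and never surfaced to the reading radiologist:

```python
# Minimal sketch of a silent (shadow-mode) trial: score every eligible study,
# log the output, and show NOTHING to the radiologist. Later, compare the log
# against the signed reports to estimate local, real-world performance.

import csv
from datetime import datetime, timezone

def run_model(study_id: str) -> dict:
    """Hypothetical stand-in for the vendor model call."""
    return {"flagged": study_id.endswith("7"), "score": 0.91}  # placeholder

def silent_trial(study_ids: list[str], log_path: str) -> None:
    """Score each study and append the result to a log; no alerts are sent."""
    with open(log_path, "a", newline="") as f:
        writer = csv.writer(f)
        for study_id in study_ids:
            result = run_model(study_id)
            writer.writerow([
                datetime.now(timezone.utc).isoformat(),
                study_id,
                result["flagged"],
                result["score"],
            ])
    # Deliberately no notification step: results are reviewed in aggregate
    # against final reports, not pushed into the reading workflow.

silent_trial(["CT-1007", "CT-1008", "CT-1017"], "silent_trial_log.csv")
```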
I think you brought up a very good point here—conversations with vendors and how those conversations can go. I’m curious if you can maybe shed a little bit more light on what questions you typically ask your vendors as you’re evaluating an algorithm for your institution?
So the checklist has pretty much made it easy. And by the way, we will hopefully get it out for other people to use. You could first wonder, why not just use another checklist? If you do research, for every paper you’ll now have almost six checklists to complete to do a good AI job. And what we found is that one of the biggest pillars is that non-imaging AI is very different from imaging AI. This is not integration into Epic. This is not “I’m going to pop up an alert and decision support for you.” You’re talking about whether you’re integrating with the voice dictation system, integrating with a PACS system, or even integrating in a third space that now requires things to go to the cloud. So that’s one of the technical know-how questions we ask our vendors. Now, some people may not have this problem, because they may have a marketplace, and all that means is you have a bucket—just like the app store on your phone, you have all these apps—so once you do the hard work of getting that app store into your enterprise, then you have the options of what’s available. And so, we ask them: How are the results displayed? What happens with getting the results back, the feedback? Who else is using this algorithm? And specifically, because our group looks at how models fail, we ask them, What cases does this model not work on? The vendors very commonly don’t like to answer that. And even if it’s not usually provided, we do ask who else is using it, just because we want to have a sense of that.
I think these are very, very important questions. I want to dig a little bit deeper in something you mentioned. You mentioned the checklist, and I know all of us at this point hate checklists.
Yes.
I’m just wondering if you currently have a scoring rubric when your council is reviewing which algorithms to consider, and what are some of the domains or examples that go into the scoring rubric?
The first thing is really around cost, because there are many cost models and approaches. Some people will charge you per study, but think about our PACS system—PACS is where all our images are stored—and if all the studies, let’s say all the ones that have a CT, are going to be run through Dania’s algorithm and you’re going to charge me per study, that’s a lot of money. So we have this understanding of the cost structure. And then—I’ll tell you, most of the algorithms fail really because of the integration. That really is where we know radiologists are not going to use them, because no one is coming to you with an FDA-cleared model that doesn’t have reported good performance. Let’s just take a moment: it wouldn’t have been cleared in the first place.
Absolutely.
And then, we also look at, Is this a problem that needs to be solved, right? Because you may say, “Oh yeah, this is really cool,” but only Judy reads these studies—you don’t need an algorithm just to support Judy. And so, our rubric really tries to capture more qualitative information, more than just, “Hey, how many studies was it trained on?” Because the reality, Dania, is it doesn’t matter. By the time the algorithm gets to you—and I know this is a very naive way of approaching it—you should just give it the benefit of the doubt that it has passed the basics, and the basics may not be good enough for you. You just need to figure out how to get it to the next—
And this really goes towards what we were saying before about quantifying the value. And this is really what these rubrics try to do, is going beyond some of the metrics that are presented to you, to really quantify value.
Yeah. And I want to actually talk about that. When you’re purchasing an algorithm, they’re going to just say, “Hey, here’s the accuracy, here’s the F1 score, here’s the sensitivity, the specificity.” But you have to think about what the task is that you want to solve. Because if you’re saying, “Take breast cancer—I’m doing mammogram screening and I want to catch all breast cancers,” then your thresholds for sensitivity and specificity are going to be very, very different, and with a triage algorithm, you’re willing to tolerate more noise. And I’ll tell you, I listened to this experiment from a group from Stanford, and I was very impressed, because what they were doing is saying, “Look, our clinic can only see 20 patients.” Imagine that: 20 patients. “So even if your AI algorithm gives me 50 patients, what am I supposed to do with them? They have nowhere to go.” So they were saying, “Look, if we want to optimize our IR suites—today we do five cases—to do seven cases, then we would need an algorithm that catches this number of cases with this yield for us to be able to match the demand that we have.”
That was such a different way of thinking about it. And then they looked at the cost. They said, “This is how much money our hospital system would make.” It sounds bad, because you and I are researchers, so we don’t usually chase the money. But I was very, very surprised, because to me it brought this honest discussion of, Do we need this or do we not?
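That capacity-first reasoning can be sketched in a few lines: instead of accepting a vendor’s default operating point, work backward from how many flagged patients the service can actually absorb. The score distribution and capacity numbers below are hypothetical:

```python
# Minimal sketch: choose an AI alert threshold by working backward from
# clinic capacity rather than using the vendor's default cutoff.

import random

random.seed(0)
# Hypothetical model scores for one week of studies (most are low-risk).
weekly_scores = [random.betavariate(1.2, 8.0) for _ in range(500)]

WEEKLY_CAPACITY = 20  # patients the clinic can actually see per week

def capacity_matched_threshold(scores: list[float], capacity: int) -> float:
    """Score cutoff such that at most `capacity` cases are flagged."""
    ranked = sorted(scores, reverse=True)
    return ranked[capacity - 1] if capacity <= len(ranked) else 0.0

threshold = capacity_matched_threshold(weekly_scores, WEEKLY_CAPACITY)
flagged = sum(score >= threshold for score in weekly_scores)
print(f"Operating threshold: {threshold:.2f} -> {flagged} flags/week")
# A vendor default (say 0.5) might flag far fewer or far more cases than the
# clinic can act on; the capacity constraint sets the useful operating point.
```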
And I really want to emphasize this point and dig a little deeper into it. I feel that on the research side we’re interested in developing the algorithms, but on the business side of things, the fundamental question right now in terms of widespread implementation of AI is, Who is paying for it? And right now, we both agree, not a lot of algorithms have demonstrated the right ROI. So I’m curious, how do you think about this, and how do you think the field is going to progress so that we have a model with a structure to pay for AI, so it’s implemented more widely?
So, first of all, we all acknowledge that today, in 2025, we don’t understand how healthcare dollars are going to be distributed—there are so many changes around our world. And I think what people forget is that we have this pie. Unfortunately, unlike the pie you described earlier where there’ll be more procedures, this pie of healthcare dollars is fixed—it cannot keep growing—and so you have to take something out to pay for the AI. That’s really what I believe. Because if you look at budgets—one of the leaders from Stanford, Nigam Shah, likes to do this analysis where they go and look at what budget is actually allocated to IT groups. And when you look at it—and I’ll give an arbitrary number—if maybe Emory only gets one million, then I’m not going to pay one million for an AI system when I need to put in home workstations.
Absolutely.
So I think, unfortunately, we will need to see CPT codes. We are seeing a law that is saying, “Hey, maybe AI could be autonomous,” and that just means it could work on its own to actually do the prescribing and move forward with that. And we’re seeing quite a lot of interest, and especially if we are going to go through a phase of deregulation, we’re going to see a lot of algorithms pushed out. But the payment cannot come back if it’s just value to the radiologist—it helps you read your MRI first, and if a study goes back in the stack, that study just gets deprioritized.
There’s no way it’s going to get a reimbursement code.
Absolutely.
And I think this is the thing: Is it the responsibility of the payers? Is it the responsibility of the health systems to pay for AI? I think we are in very early stages. We do not know the answer to that, and I think things are going to change quite a bit in the next couple of years.
Switching gears a little bit, one of the questions I get a lot from trainees that I talk to is, How can I gain more training and knowledge in AI? I mean, AI is booming in radiology and IR and procedural services. How can people learn more, either trainees or even people in practice?
Yeah, so I’ll tell you, because of these integrated training pathways, most diagnostic residences where our IR trainees are have very robust AI training. There’s the NIIC-RAD course provided through SIIM and RSNA. Some of the courses, they have dedicated training tracks. I know that Emory, we have an informatics track. I know that I talked recently to Tejani, and they’re having their informatics track residents actually participate in the council, the AI council, vet some of the false things and explain, “Oh, I think this is not working, because of blah, blah, blah.” And so, I feel that our residents are eager to learn and there are increasingly new opportunities. Now, there may not be specific ones for interventional radiologists, but what I can tell you is that the same knowledge for diagnostic is, for this one task, is only harder in interventional radiology, but it’s the same. You know what I mean? Segmentations are the same.
And it’s learning how to ask the right questions, learning where the algorithms fail. And I think this is the skill that we really need to teach our residents who are trying to use this. I was having a conversation yesterday at this meeting after our AI session, and we were discussing, does AI right now hurt the training, especially of our junior residents?
So again, an interesting question, and we actually discussed this recently at AUR, which we… So at Emory, the trainees don’t get to see AI results. And maybe one of the things we haven’t really discussed here is the ethical concerns, and automation bias is one of them. For our listeners who may not be familiar: it turns out that the answer you give Dania can mislead her. You could say, “Dania, I think there’s an aneurysm here.” And even how you say it matters—you could say, “I see an abnormality,” versus, “I see an aneurysm,” and those two could have completely different impacts downstream on your performance. So we know this. And so, what most people have said is, “We are not going to expose our trainees to AI.” But that’s not universal across all training institutions. And I think, as of today, we also have to look at private practice—I mean, remember, most of the practice of radiology is still in private practice. Now, advanced IR is in academic practices, but if we’re not honest about this, trainees are not going to be able to work in these areas. And I remember at the AUR session, someone came up and said, “Oh, I got these trainees, and we had these pictures, and they said, ‘I can’t see anything.'”
And it’s the same we see with our trainees, when they’ve been working in the most new angio suites, and they go back to an old room and they’re like, “Ah, I cannot see my wire.” And you’re like, “Oh, I know you cannot see your wire, because you’re used to the newest machine.” And so, you could say that that can be fixed with AI. And so, I feel that we don’t know about AI in training. And I want our listeners to really take that point. There are things that we’re saying we don’t know, and that’s okay. And that’s one of the takeaways of this podcast, is just to remember that we are in this place, we’re giving you the best information that we have today and some we don’t know, and it’s going to change. And you just have to figure out how to keep being plugged, so that you understand when the changes are coming through.
And I think this is very well said. The real take-home point here is we are still in the infancy of AI in medicine, but the future is certainly very, very bright.
Yes. Yes, and I want to also encourage our young trainees, because just this last week—today’s Monday—Bill Gates said we will not need doctors soon. And I was hoping that he would say that next year. But then again, 10 years ago, Geoffrey Hinton, who won the Nobel Prize last year, said, “We don’t need radiologists.” And if we step back and look at how AI was initially, with the supervised approaches and the deep learning approaches, we said, “Oh, the adoption is so slow.” And now, all of a sudden, with generative networks and foundation models, we are saying, “Ah, we need to slow down. This is too fast.” So we absolutely agree that the ship has sailed. But I do want to really reassure—I mean, if you choose IR, obviously it’s a pretty great profession—but I want to say that if you look at our own specialty, and for our colleagues even in diagnostic radiology, I think the hype unfortunately ends up harming us. Because if, 10 years from today, we still have a job, then you and I are going to keep showing these lessons: 20 years ago, Geoffrey Hinton said this, and 10 years ago, Bill Gates said this. So I want to really encourage our students to be in a place where they can appreciate how rapidly this technology is changing, and also be able to just slow down and really enjoy what it means to be an interventional radiologist, because we’re really a special specialty for this.
And definitely, AI is going to augment how we practice; it is going to make us better IRs, for sure. And I will definitely emphasize: don’t believe the hype. It’s a very complex question, especially for those of us who do a lot of research in this field. We are very far from the point where AI is going to replace anyone.
Okay. So I think we are coming to the homestretch, unless you disagree. And I want to end on—I think we had a little bit of a low mood there, because we were suggesting our specialty could become extinct. It’s not going to. But what are you excited about right now?
I mean, I think there is a lot to be excited about for the future. I think large language models specifically, and generative AI, are definitely going to change how we practice over the next decade or so. We’re already starting to see some of those effects, and I’m very, very excited for the future. It is my belief that AI is eventually going to allow us to pick the right procedure and the right device for the right patient for the most optimal outcome.
And so—
How about you?
… top three things—oh, I will tell you what I’m excited about—but what are the top three ways you think our practicing clinicians today can actually start to use AI?
So I think right now, based on what’s available on the market, definitely learning a little bit more about some of the patient selection and triage software, especially given some of the signal that we’re starting to see on patient outcomes, would be very helpful. Starting to learn how to talk to the vendors and what questions to ask would definitely be very high on my list. This is a skill that I firmly believe we need to teach to this generation of radiologists as AI becomes more mainstream and seen in every practice, and really starting to ask questions and learn more about the space.
Yeah, so I’ll tell you that—and this is lack of preparation from Judy, right?—I came to this meeting—of course I prepared for the panel yesterday—I came to this meeting, and I just did not expect to see so much AI showcased here. And it took me a little bit over a few years when I started going to RSNA, where we were beginning these conversations. And I have the same feeling when I go to ASCO, where I go to AHA, other non-radiology societies that I assist with, with the AI journey. And we had Alan Matsumoto and all these people now speaking about AI. And that was, I will tell you, very surprising to me, because I felt that IR, which is radiology, in the family of radiology, was behaving almost like an infant, trying to figure out what AI is. And so, I hope that our listeners are going to really be curious—which is what we have to do—and not to be overwhelmed with the hype that is going to come, to also step outside their comfort zone, because ambient listening, which is the AI that helps you document your clinic visits, is actually being rapidly adopted. It probably is being adopted in your hospital today and you’re unfamiliar.
Absolutely.
And so, if you could just step outside—which is something that IRs are very good at—if you could step outside and say, “Hey, here are some tools that I can be using for my work,” I believe that you can actually move the needle. So don’t be overwhelmed, but don’t stay too comfortable, waiting again for diagnostic radiology to set the path that IR follows. Just step out of your comfort zone, see what the cardiologists or neuro IRs are using, and use that to help advocate for yourself and move the needle forward.
Absolutely. We definitely all need to start thinking about how we’re going to harness AI to make us better interventional radiologists and better serve our patients. And on that note, we’re probably going to stop here and thank our listeners for tuning in.