New tech tool could help with some tasks, wouldn't work with others
Transcript
[00:00:00] Host Amber Smith: Upstate Medical University in Syracuse, New York, invites you to be "The Informed Patient" with the podcast that features experts from Central New York's only academic medical center. I'm your host, Amber Smith. We've heard a lot lately about how artificial intelligence is being used in the computer industry, in manufacturing and in social media. Today we'll explore some of the uses in medicine and healthcare. My guest is Dr. Amr Wardeh. He's a resident at Upstate with an interest in artificial intelligence and informatics. Welcome to "The Informed Patient," Dr. Wardeh.
[00:00:36] Amr Wardeh, MD: Hi, Amber. Thanks for having me.
[00:00:37] Host Amber Smith: I'd like to start by asking you to define what AI is because we hear about its use in so many different fields.
[00:00:46] Amr Wardeh, MD: Oh, absolutely. So AI, or artificial intelligence, broadly speaking, is a field that aims to make computers perform tasks that we usually attribute to humans, or to simulate human intelligence. For example, most of us use Siri or a Google assistant, and we can ask Siri, "Hey, what is the weather looking like today?" And it will give you a pretty good answer that kind of simulates what a human would answer.
[00:01:17] Host Amber Smith: So does AI depend on having a lot of good quality, robust data in order to be able to do that?
[00:01:27] Amr Wardeh, MD: Oh, absolutely. A big part of AI is being able to train these computer algorithms to perform tasks accurately. And in the field of computer science, there is a very common saying: "garbage in, garbage out." So if you don't have good data, you are not going to end up with a good output or a good answer from these machines. So it's actually crucial to have good data.
[00:01:57] Host Amber Smith: As we have more and more good data, would it be possible that AI would become smarter than a human?
[00:02:06] Amr Wardeh, MD: That's a very good question. The more data we give these AIs, the more they will be able to learn and know. And I'm going to refer to something that's been really making waves in many industries, which is ChatGPT and these large language models. ChatGPT has been trained on an enormous amount of data, and because of that, it is able to give very intelligent-sounding answers when you interact with it. And more recently, OpenAI, the company behind ChatGPT, released a newer version of ChatGPT based on something called GPT-4, which is trained on even more data, and it performs even better than the older model. So the more we train and the more data we put into these systems, the better they seem to become, more or less.
[00:03:11] Host Amber Smith: Is ChatGPT the same thing as machine learning? I've heard the term machine learning, but what does that mean?
[00:03:20] Amr Wardeh, MD: Machine learning, broadly speaking, is any computer algorithm that is able to learn from some data, whether it is image, text or even sound data, and then make a prediction or give you an output based on that. Specifically, I want to talk about a subfield of machine learning called deep learning, which is what ChatGPT, and most of these new AI tools we're hearing about, are based on.
What deep learning does is try to simulate how our brains are structured inside a computer, and then teach that simulated brain, let's call it, how to perform a certain task. And it turns out, if you do it right, you can make these systems do amazing things, like detect cancer, or chat with you and write essays for you.
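For readers curious what "teaching a simulated brain" might look like in code, here is a minimal, hypothetical sketch in Python using the PyTorch library. The layer sizes, the fake data and the "abnormal vs. normal" framing are illustrative assumptions, not anything from an actual medical model.

```python
# A minimal sketch of the idea behind deep learning: layers of simple
# "artificial neurons" whose internal weights are adjusted, example by example,
# until the network maps inputs (say, image features) to the right label.
import torch
import torch.nn as nn

# A tiny "simulated brain": two layers of neurons with a nonlinearity between them.
model = nn.Sequential(
    nn.Linear(64, 32),   # 64 input features (e.g., a tiny 8x8 image, flattened)
    nn.ReLU(),
    nn.Linear(32, 2),    # 2 outputs, e.g., "abnormal" vs. "normal" (illustrative)
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Fake data stands in for labeled training examples (e.g., images plus diagnoses).
inputs = torch.randn(100, 64)
labels = torch.randint(0, 2, (100,))

# "Teaching" the network: repeatedly nudge its weights so that its predictions
# move closer to the known answers.
for _ in range(50):
    optimizer.zero_grad()
    predictions = model(inputs)
    loss = loss_fn(predictions, labels)
    loss.backward()
    optimizer.step()
```

Real systems use far larger networks and far more data, but the training loop follows the same pattern.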
[00:04:17] Host Amber Smith: That's pretty fascinating, because when you think about the first computers, they were able to solve mathematical problems, but you're talking about computers that you teach to be able to learn, basically. Do they have independent thought?
[00:04:35] Amr Wardeh, MD: It's really crazy when you think about it and the types of things these things are capable of doing now.
Do they have independent thought? As of now, no. What we have right now is something we call weak AI. These are AI systems that are able to do specific tasks. For example -- even though that is kind of changing -- let's say an AI that can detect cancer on chest radiographs, or X-rays. That system can only perform that one task. It might be able to do it very well, but it won't be able to do anything else really well.
But the idea of having an intelligent system that can do multiple tasks and learn like a human would across multiple domains is something that is referred to as artificial general intelligence. And that is one of the goals, actually, depending on who you ask. For some of the people who build these AI models, the goal is to create an artificial general intelligence that can do everything a human would be able to do.
[00:05:46] Host Amber Smith: Well, let's focus on the field of radiology since that's your specialty, and specifically on mammograms, which are used to detect breast cancer. Typically, radiologists look at a mammogram for early signs of cancer, but now there are AI systems that can double check the radiologist. Can you explain to us how that works?
[00:06:08] Amr Wardeh, MD: Yeah, absolutely. There are a lot of new AI tools that are coming up where they can point out where a cancer might be on a mammogram, and even on other types of imaging like chest radiographs or bone radiographs that tell us, "oh, there is a fracture" or "there is a pneumonia," et cetera.
And these AI tools are proving to be extremely good, in the sense that they are machines, so they don't get tired. They will do things in a very systematic way, most of the time. So where, let's say, a human might not be able to maintain focus or might miss a small thing, an AI might be able to pick up on that and actually aid the radiologist. So there is a lot of talk right now in radiology and other areas of medicine about how to incorporate these AI tools to help us help the patient better and not miss cancer, for example.
[00:07:23] Host Amber Smith: So in this instance, is the artificial intelligence that's reading mammograms learning from each mammogram that it studies? And is it getting better the more it reads?
[00:07:35] Amr Wardeh, MD: Most of the AI tools that are available now, they don't really learn once they are deployed into a clinical environment. So they simply do the tasks they have been trained on initially.
Some newer companies and groups are working toward systems that gradually learn from feedback or continuously learn over time. That is something that will be happening. But as of now, once the AI has been trained, it is locked. It doesn't learn anything new; it just does the task it has been trained to do.
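To make the "locked once trained" point concrete, here is a hedged sketch, again in Python with PyTorch and continuing the hypothetical example above: a deployed model is switched to an inference-only mode and run without any weight updates, so reading one more case does not change what it knows.

```python
import torch
import torch.nn as nn

# Hypothetical already-trained model (the tiny network from the earlier sketch,
# recreated here only so this snippet runs on its own).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 2))
model.eval()                        # switch to evaluation (inference-only) mode

with torch.no_grad():               # explicitly turn off the learning machinery
    new_case = torch.randn(1, 64)   # stands in for the features of one new image
    prediction = model(new_case)    # an output is produced; the weights stay "locked"
```

Continuously learning systems would add a feedback and retraining step on top of this, which is what the newer efforts Dr. Wardeh mentions are working toward.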
[00:08:18] Host Amber Smith: This is Upstate's "The Informed Patient" podcast. I'm your host, Amber Smith. I'm talking with Dr. Amr Wardeh, a radiology resident at Upstate, about the use of AI, or artificial intelligence, in medicine.
I read a study in the journal Pediatric Radiology that looked at bone fractures and whether they could be spotted on X-rays. The study showed that AI scored higher marks than emergency physicians, but it could not beat experienced radiologists. Does that surprise you?
[00:08:49] Amr Wardeh, MD: Not really, actually, because recently we are seeing a lot of studies coming out showing that AI can perform on par with radiologists in certain areas. An emergency doc, even though they might see a lot of fractures, is not trained to look at images the way a radiologist would. And so an AI that is trained to detect these fractures would probably perform more like a radiologist than like an emergency doc. It might sound surprising, but it's actually expected.
[00:09:26] Host Amber Smith: What other areas of medical care do you think we might see AI being used in, in the coming years?
[00:09:34] Amr Wardeh, MD: Within the coming years, we are going to see a lot of integration of AI into clinical workflows. One of the things we're going to see: if you have ever chatted with your doctor over your healthcare system's chat feature, a lot of work is being done right now to have ChatGPT or similar tools draft responses, to automate some of that work and improve the physician's efficiency while potentially providing more thorough answers and information to patients.
That's one of the major things that can help both physicians and patients, because on average, doctors spend about 15 hours per week doing administrative tasks that are not really related to patient care. So if AI can help reduce that administrative time so that a doctor can focus more on patients, or on other tasks that might benefit the patient, then it's a welcome change.
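As a rough illustration of the "AI drafts, physician reviews" workflow described above, here is a hedged sketch using the OpenAI Python client. The prompt wording and the example patient message are assumptions for illustration only; nothing is sent automatically, and as Dr. Wardeh stresses later, a physician would review and edit any draft before it reaches a patient.

```python
# A sketch of an AI-drafted reply to a patient portal message, for a physician
# to review and edit before sending. Assumes the openai package is installed
# and an API key is configured in the environment.
from openai import OpenAI

client = OpenAI()

patient_message = (
    "My blood pressure readings have been higher this week. Should I be worried?"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": "Draft a polite, clearly worded reply to a patient message "
                       "for a physician to review and edit before sending.",
        },
        {"role": "user", "content": patient_message},
    ],
)

draft_reply = response.choices[0].message.content
print(draft_reply)  # the draft is a starting point, never an auto-sent answer
```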
[00:10:44] Host Amber Smith: So patients might feel like they get more time with their doctor because their doctor's not being pulled away to the administrative tasks, maybe?
[00:10:53] Amr Wardeh, MD: Absolutely. And another area where current work is being done is this: AI will, for example, be able to listen to your conversation with your doctor when you go in for a visit, and your doctor won't have to sit facing their computer the whole time, writing notes. They can actually look you in the eye the entire time and discuss things. The AI will listen and document everything in the background. So all of those things would be welcome changes.
[00:11:25] Host Amber Smith: But AI wouldn't be talking to the patient and telling them what they think is wrong with them or what they think should be happening?
[00:11:34] Amr Wardeh, MD: Right. That is actually a dangerous area right now, because a lot of these AI systems are not perfect. These large language models, like ChatGPT, tend to hallucinate. So if you ask them a medical question, they might answer with a lot of confidence, but their answer might not be factually correct. In the setting of healthcare, specifically, that's very dangerous.
So autonomously answering or interacting with patients, I don't think we're going to get there soon. You would need a physician to basically review the work of the AI before interacting with the patient. So it'll be AI-aided physician interaction, rather than just AI by itself.
[00:12:22] Host Amber Smith: You used the term hallucination. Is that the same in AI as it is in a person? I think of that as being like a drug-induced stupor of some sort. But in AI, what does that mean?
[00:12:38] Amr Wardeh, MD: That's a great question. We usually use the term hallucination specifically with the large language models like ChatGPT, where sometimes, if you ask them a question they don't really know the answer to, they might simply fabricate an answer that sounds like a very good answer to somebody who does not have experience in that topic. And so it's very dangerous to, for example, have a tool like that interact with patients and provide recommendations that might be completely wrong, without oversight.
[00:13:19] Host Amber Smith: So does this tie in with virtual healthcare? During the pandemic specifically, we saw this sort of explode, where you could connect with your doctor virtually, out of necessity because of the pandemic, but it's still in use today, and more so now than it was before the pandemic. Does AI have a role in virtual healthcare?
[00:13:45] Amr Wardeh, MD: I think so. For example, when you're talking to your doctor over video conference, your doctor might not have to write notes anymore, like we were saying earlier. Everything will be documented by the AI, so your doctor can actually spend more time listening to you and talking to you.
Another thing AI might be able to help with: when you're chatting with your doctor, like we were saying earlier, if the AI can assist the doctor with, let's say, drafting a response to a question a patient asked, it might save them, let's say, five minutes per response, which amounts to significant time savings per day that they can spend seeing other patients or taking care of more important things.
[00:14:35] Host Amber Smith: Some people are afraid that AI will replace human doctors. Are you concerned about that?
[00:14:44] Amr Wardeh, MD: That's going to take a while to happen. As of now, the tools that we have will not be able to replace physicians. At least, I wouldn't be comfortable going to, let's say, a hospital or a clinic visit and just having the AI see me and then tell me, "OK, go do this and that," without having a human component. Especially if they're prone to error and hallucinations.
But what we're going to see a lot of in the near future, I believe, is AI that assists doctors, where a doctor might be able to see more patients and might be more productive. That will definitely happen, and maybe you won't need as many doctors to do the same amount of work. But will AI completely eliminate the need for doctors? I think we're still far from that.
[00:15:40] Host Amber Smith: Well, it's certainly a fascinating field and also a little bit scary. I really appreciate you making time to explain it to us, Dr. Wardeh.
[00:15:49] Amr Wardeh, MD: Absolutely. It was a pleasure. Thank you for having me, Amber.
[00:15:53] Host Amber Smith: My guest has been Dr. Amr Wardeh, a radiology resident at Upstate. "The Informed Patient" is a podcast covering health, science and medicine, brought to you by Upstate Medical University in Syracuse, New York, and produced by Jim Howe. Find our archive of previous episodes at upstate.edu/informed. This is your host, Amber Smith, thanking you for listening.