
Rapidly evolving technology lacks guidelines, rules or laws
Transcript
Host Amber Smith: Upstate Medical University in Syracuse, New York, invites you to be The Informed Patient, with the podcast that features experts from Central New York's only academic medical center. I'm your host, Amber Smith.
Computers and machines these days perform more and more cognitive functions that we associate with human minds. This rapidly advancing technology is known as artificial intelligence, or AI.
For help understanding the ethical concerns tied up with AI, I'm talking with Dr. Serife Tekin. She's an assistant professor of bioethics and humanities at Upstate.
Welcome to "The Informed Patient," Dr. Tekin.
Serife Tekin, PhD: Thank you, Amber, for having me.
Host Amber Smith: Whether it's used in medicine or in other fields, AI depends on massive quantities of data and software with complex programming and mathematical algorithms.
Do these or should these algorithms follow the "do no harm" oath that physicians take when they enter the profession?
Serife Tekin, PhD: That's a great question, Amber. Insofar as the "do no harm" oath is the primary ethical principle that guides medical practice, it's hard to say whether AI follows it, because there's no centralized system behind AI. When we talk about learning something like the "do no harm" principle, we assume there's an agent, a human person, some rational being that is contemplating what they're about to do, assessing its consequences and then deciding accordingly.
While AI gives us a lot of tools to analyze data, maybe calculate potential consequences or outcomes of that data, it is not centralized in a way that can make a cost-benefit analysis to anticipate and prevent harm. That's what we have in human intelligence, and I guess contemporary research in artificial intelligence is trying to make AI make these self-governing decisions, but we are not quite there yet.
Host Amber Smith: That's a good point. I mean, there's a multitude of people embarking on AI or already using it in some ways, but no centralized authority telling them the rules of the game, really.
Serife Tekin, PhD: Absolutely. Absolutely.
Host Amber Smith: Well, let's talk about some examples of how AI might be able to improve care for patients.
Serife Tekin, PhD: This is great. Let's start with the positives. So, contemporary health care is very complicated to navigate, especially in the context of the United States, right?
So take an ordinary patient, and they want to see a dentist. They need to find a dentist who might take their insurance and who might be able to address their particular problem. If you just need a cleaning, that's a different story. But if you need a more complex procedure, you need to make sure that the dentist will be able to provide you with the services that you need.
So I think there's a great use here for an artificial intelligence technology that might match patients' needs with available providers, according to their particular insurance, and help us come up with a dentist who's available, who's receiving patients, who will give us what we need. So I think that might be a good use. And I know that there are some technologies in the context of mental health that match the patient with a provider who might meet their needs. So I think these kinds of, I call it, like, facilitator roles are good places for artificial intelligence technology.
Host Amber Smith: So that could replace the dozens of phone calls a person would previously have to make, calling each dentist they could find in the Yellow Pages and just seeing if they take their insurance and if they can help them. And that would be a huge time saver, it sounds like.
Serife Tekin, PhD: Exactly, exactly. I'm a fan of using technology in this way.
Host Amber Smith: Are there ways that you've looked at that would be helpful to medical providers?
Serife Tekin, PhD: Yes, and I think we can first generally talk about AI's ability to help patients track their experiences, their symptoms, whether psychological or even physical. Something like diabetes might require the patient really overseeing their blood sugar levels and how the insulin works, and so on and so forth.
And I think, insofar as medicine is increasingly trying to be patient-centered, that kind of data becomes extremely important for the clinicians, for the providers, just to see what the patient has been up to in the months that they haven't seen them. And as you know, we have a very big scarcity issue in medicine, in that we simply do not have enough providers to meet the needs of patients.
So if we can, again, use artificial intelligence technologies to track patient behavior or patient experience in that off-clinic time, that might assist in the clinical context in a way that I call "triangulation" -- the AI, the patient and the provider. Together they can come up with a diagnosis of the patient's progress and then make decisions as to what else they can do in the next stages of treatment.
So I think it can be helpful for clinicians in that way.
Host Amber Smith: So it wouldn't replace the doctor, but it would augment or help them. It'd be a tool for the doctor to kind of keep track of patients.
Serife Tekin, PhD: Precisely, especially at this stage of AI's applications in medicine, where the unknowns and uncertainties are definitely a lot greater than the knowns.
Currently, I think it's an ethical obligation also to use AI as a supplement to the provider instead of a replacement.
Host Amber Smith: This is Upstate's "The Informed Patient" podcast. I'm your host, Amber Smith.
I'm talking to Dr. Serife Tekin. She's an assistant professor of bioethics and humanities at Upstate, and we're talking about artificial intelligence, or AI.
So let's look at some of the ethical worries. Will AI be reliable?
Serife Tekin, PhD: The question of reliability is very important, and we can think about the reliability of science in two senses. One is the reliability of our treatment methods over a large population, right? A lot of advances in medicine happened not because they successfully treated one patient, but because the results were replicable over a group of patients.
And then we decided to, say, use vaccines or use medications that can be available to a large number of people, a large group of people. It's up in the air currently whether AI has that kind of reliability, simply because medical applications have not been replicated to help us draw a generalization as to whether they can help a large group of patients, right?
And that brings a lot of ethical questions. If the medical applications of artificial intelligence have not proved their scientific reliability, is it then ethical to start using them to treat patients when we simply don't have enough research?
That brings me to the other important ethical layer. We talk in medicine of unintended consequences or unprecedented risks. So in certain medical conditions that do not have solutions, it might make sense for the clinician to turn to AI and try an experimental method of intervention.
And we do that in cancer treatment a lot. We have a lot of experimental interventions whose consequences we don't know yet, but the patient is at a really late stage, things don't look good for them, so we give it a try and use the experimental method. In that sense, risks are not that high because the patient is already in a vulnerable situation.
But with AI, there are lots of treatments that have not proven their effectiveness, yet they're being pitched as the next breakthrough intervention. And I worry that if these kinds of treatments replace other existing and maybe more effective treatments, we might be doing more harm to the patient than good.
So, I think, in this sense, ethicists and scientists have to talk to each other and move hand in hand to foresee the unintended consequences, effects and harms of these technologies.
Host Amber Smith: Who would be accountable when AI gets it wrong?
Serife Tekin, PhD: Currently, no one. In fact, in my area of research, mental health ethics, I have been looking at certain apps that provide, in quotation marks, "psychotherapy" to patients where there's no agent. Again, it's just a complex algorithm, an AI, that's driving that kind of app. And some of these apps have made recommendations to patients that are actually extremely dangerous and harmful, and there was no one to turn to.
And especially when we think of teenagers, for instance, using these kinds of apps, where there's no adult involved, there's no mental health provider involved, when things go wrong, the company developing these apps, often a private, profit-making company, usually pulls its product off the market, but the harm is already done. And it's unclear how patients or people who are negatively affected could pursue action against those companies, especially when your health is declining. Any kind of compensation or lawsuit you'll get, I mean, that's not going to bring you any concrete health-related results.
So, that's a huge question.
Host Amber Smith: Well, I think in theory, AI is meant to get better over time. It teaches itself, or it learns more as it goes. But do you have any concerns about it picking up, I don't know, our bad habits, learning to discriminate or developing inherent biases?
Serife Tekin, PhD: AI is not a miracle, right? So it's looking at the way humans think, the way humans produce knowledge, the existing knowledge that we produce, right? So it's all about our practices. That's the raw data for AI. So whatever failures that we might have as humans will also get translated into AI.
I always give my students examples of, like, hiring practices, right? AI learns to make a decision to hire the next CEO of the company based on existing CVs (resumés), by looking at past hiring practices that we have used. So let's say some random factor, like wearing glasses (laughs), has over time been common among the people chosen as CEOs. That is not necessarily a thing we would put in the qualifications of the CEO we are looking for, but AI is intelligent in crunching data; it might pick out that information and might choose our next CEO based on whether or not they have glasses, and that would not be credible.
So in that sense, I think AI is not going to change our bad habits or correct them. In fact, it might perpetuate some of those bad habits because it feeds back into the system at the end. So we have to be careful about that.
Host Amber Smith: Can we teach AI to distinguish the bad information from the good-quality information?
Serife Tekin, PhD: I want to believe that, except that we as humans do not have an agreement on what good information and bad information is. And this is why I think there's no running away from AI, and we should embrace it, but we really need to work together, computer scientists, humanities scholars, ethicists, hand in hand, because a lot of the engineers behind designing these apps are not necessarily informed about how we think about distinguishing good science, good scientific research or good information from bad information. Maybe social scientists or humanities scholars might be more trained in that. So I think if we engage in conversation and collaboration between different stakeholders, different kinds of experts, as we are engaging in AI research, we might be able to control some of the potential harms that this technology might generate.
Host Amber Smith: It seems like a really scary time, like the machines are about to take over the world. Do you see it that way?
Serife Tekin, PhD: I'm not as pessimistic. In the history of science and technology and medicine, actually, you see these kinds of inflection points, where there's, like, this big invention or big thing, and everybody gets so enthusiastic about it. We stop talking about anything else. And then that enthusiasm fades as people are disappointed a little bit, right?
So I think right now we are in that, like, boom, explosion period, and a lot of people are paying attention to it and engaging with it, and it does look like it might take our jobs, et cetera, et cetera.
But I think it will stabilize, and we will maybe learn to understand that there are different contexts in which AI can help us and maybe replace some of our jobs. But it will require a high level of human engagement and human component. So I'm not, I guess, as worried yet.
Host Amber Smith: Just because we can train machines to think like humans, does that necessarily mean that we should, and what do we need to think about before we do?
Serife Tekin, PhD: I think it's really important to rethink what it means to be a human. So, yes, we are mostly reasonable, responsive, rational beings, and calculation, language, thinking are our important capacities, and that's what we seem to be trying to make AI like, right? Like us, intelligent like us. But on the other hand, we're intrinsically embodied and social beings, right? We have physical bodies, flesh.
We do not have, like, automatic car parts, right? We live in communities, we touch each other, we talk to each other, we engage with each other, and I think a lot of our intelligence emerges from this very embodied and social nature of human beings, and it's unclear whether that could be replicated in the context of AI. They are not flesh like us.
Host Amber Smith: Well, very interesting. I appreciate you making time for this interview, Dr. Tekin.
Serife Tekin, PhD: Thank you so much for having me.
Host Amber Smith: My guest has been Dr. Serife Tekin. She's an assistant professor of bioethics and humanities at Upstate.
"The Informed Patient" is a podcast covering health, science and medicine, brought to you by Upstate Medical University in Syracuse, New York, and produced by Jim Howe.
Find our archive of previous episodes at upstate.edu/informed.
If you enjoyed this episode, please tell a friend to listen, too, and you can rate and review "The Informed Patient" podcast on Spotify, Apple, YouTube or wherever you tune in.
This is your host, Amber Smith, thanking you for listening.