
AI's use in mental health care offers promise and risk
AI, or artificial intelligence, is offering ways to fill the gaps in mental health care, but not without ethical and clinical concerns. Bioethicist Serife Tekin, PhD, describes how an AI app can be used in psychotherapy and shares her concerns about the promise of AI in mental health care. She is an associate professor of bioethics and humanities at Upstate.
Transcript
Host Amber Smith: Upstate Medical University in Syracuse, New York, invites you to be The Informed Patient, with the podcast that features experts from Central New York's only academic medical center. I'm your host, Amber Smith.
A shortage of mental health professionals and spotty insurance coverage lead to long waits and costly care in the United States. Artificial intelligence, or AI, is offering ways to fill the gaps in our mental health care system, but not without ethical concerns, especially for use in children.
Here with me to talk about these concerns is Serife Tekin. She's an associate professor of bioethics and humanities at Upstate.
Welcome back to "The Informed Patient," Dr. Tekin.
Serife Tekin, PhD: Thank you so much for having me.
Host Amber Smith: How is AI being used in mental health?
Serife Tekin, PhD: There are a couple of major ways in which it is currently being used in mental health settings.
One is as a support mechanism. There are several mood trackers that enable patients, or individuals with mental health challenges, to track their emotional states and check in with this AI agent on a regular basis. That record is then used in the clinical context as a way to engage with what has been happening with the patient during the time the clinician has not seen them.
And the second way that is being promoted is AI agent as a therapist in its own right. You basically download an app that purports to be a psychotherapist, and you engage in text-based conversations with this chatbot. And the chatbot provides cognitive behavior therapy-type intervention, or therapeutic methods, in helping you respond to some of your mental health challenges.
Host Amber Smith: So the chatbot, is that in association with a real therapist or is that just an app?
Serife Tekin, PhD: Unfortunately, it's just an app. Developers build these apps using what we call large language models, trained on data that is informed by different kinds of treatment methods.
And then it becomes this conversational agent, and when you are interacting, you are interacting with that particular chatbot. So it's not a person; there's no clinician on the other side. You are just interacting with this little bot on your phone.
Host Amber Smith: But the mood trackers, those are used in between therapy sessions.
Serife Tekin, PhD: Exactly. There are different varieties of mood trackers, and some clinicians might have their favorites, which they recommend to their patients. So (a therapist might say) "I'm not going to be able to see you until next month. How about during this time you record your moods and what's going on with you in this app, and maybe get some suggestions from the app?" That record is then used to support the clinical encounter when they see the patient the next time.
Host Amber Smith: Well, how useful or effective are these AI devices in mental health care?
Serife Tekin, PhD: So the first type, the one that is used in support of the regular clinical encounter, seems to be helpful. And it also enables individuals to learn to help themselves as they engage with their emotions through that medium.
It's similar, in a way, to writing about your feelings and thoughts on a platform and then looking through your notes when you're talking to a clinician. So the AI, the technology, is secondary to the action that you are doing. That seems to be a fairly useful tool, at least for individuals who are reflective enough about what's happening in their mental states.
But for the latter, the research is really not conclusive yet. These technologies are developed primarily by companies that are making a profit on the sale of these chatbots, and there's not a lot of transparency as to how they train these chatbots or what kind of data is used.
But some of them are getting approval to be used in mental health settings from organizations like the Food and Drug Administration, and they are being used in a therapeutic manner, but we don't have conclusive research as to whether this mode of intervention is actually effective in treating mental disorders.
There's some data that shows some improvement for people with maybe more minor mental health challenges. But even that is not conclusive, because we don't know if it's the novelty of the device that's actually leading to some of the positive changes.
There are short-term studies, but we need long-term studies to see whether or not these tools are effective in treating different types of mental disorders. We might find, for instance, that they're helpful treating anxiety, but not very helpful in treating people with schizophrenia.
Host Amber Smith: So they really haven't been in use long enough to know that either.
Serife Tekin, PhD: No, we don't know. It's very limited. It's actually very recent that one of these chatbots received a breakthrough device designation from the Food and Drug Administration, which opens the door for running more clinical trials using these kinds of technologies before they're actually evidence-based and approved for general use, just like in other kinds of medical research that we conduct. But it's too early to say anything about their effectiveness.
Host Amber Smith: Now, why are bioethicists and pediatricians concerned about using AI for mental health care in children specifically?
Serife Tekin, PhD: In addition to this technology being really new and not having been thoroughly vetted for effectiveness, young people do not yet have a sense of what it takes to see a therapist or a clinician, or how to engage with and think about their own mental space. They do not have that developed self-awareness, or awareness of the particular kinds of challenges they might be having. So they might think of the chatbot as a genuine agent or a friend, or they might just spend a lot of time with that technology at the expense of opening up to their parents or friends or other people around them who care for them. That's the biggest risk.
And also, in typical mental health care settings that involve children, the clinicians work with the parents and the child together. So parents are always in the loop, and clinicians are always in the loop, because you can't just help the child in the clinic, right? You might need to intervene in some kind of family dynamic, so involving the parents is very, very important.
But also, if you want to help the child apply some of the techniques that you're teaching them, it's really important to get the parents' consent and involvement. That is not the case with these AI chatbot agents. It's literally just between the teenager and the bot; there's no one else in the loop. So if something is not going right in that setting, neither parents nor clinicians are there to intervene.
Host Amber Smith: This is Upstate's "The Informed Patient" podcast. I'm your host, Amber Smith.
I'm talking with bioethicist Serife Tekin, from Upstate's Center for Bioethics and Humanities, about the use of AI in mental health care.
Let's go over some of the potential pros and cons of AI in mental health. Do you worry about the lack of the human connection?
Serife Tekin, PhD: I do. In fact, we don't know why psychotherapy works, but we know that it works. And one of the fundamental reasons it's believed to work is that there's a human connection, a bond, a therapeutic alliance, between the clinician and the individual receiving psychotherapy. But there's no such agent or caregiver in the context of an AI chatbot. Initially, it might make one feel like they're being listened to or heard, but eventually they realize that unless they report something to this chat agent, nothing comes from the other side. So that is really worrisome.
And things like empathy, being able to be present with the patient in the clinical setting, are very important. These AI technologies put a lot of emphasis on the verbal language that one types into their phone, but a lot of the therapeutic encounter, or clinical encounter, happens when there's silence, when there's a pause, or when there's a really understanding nod or a smile. Those are the kinds of things that are really, really helpful for individuals in moments of mental health crisis, and I don't see AI ever being able to do that.
Host Amber Smith: Do you think there are some people out there who have maybe been reluctant to seek therapy but might be willing to do it through AI, because it seems more private?
Serife Tekin, PhD: Absolutely. Unfortunately, there's still a lot of stigma around receiving mental health care. And also, it's extremely costly, both in terms of time and in terms of finances. Even if you have excellent insurance coverage, it might take you a long time to find a provider, whereas these chatbots are super convenient. You just download them, and the fee that you pay is very, very nominal. So individuals, I think, might be drawn to these resources for those reasons. As you said, it's private, one-on-one; not many people have to hear it. And it's convenient: if you're not feeling well at 11 at night, you can download the app and start interacting with it.
Also, in the United States, especially in a lot of remote areas, there's a shortage of clinicians who provide mental health care. So for those individuals, this might actually be the only way they can receive mental health care.
So there are some reasons why people might turn to these modalities, but the downside is, if they do not work well, or do not work at all, then we end up having wasted some time and maybe not having started the process of seeing an actual clinician.
I think as transitional tools, maybe if people are on wait lists or don't have immediate resources to access, these might be good tools. But we need to make sure that certain ethical and policy-related guidelines are followed in developing these technologies if we want to use them to address the physician or clinician shortage.
Host Amber Smith: Well, how do we know that the AI knows what it's doing, that it's giving good advice, because therapists are licensed, but AI isn't, right?
Serife Tekin, PhD: Absolutely. That's, in fact, one of my biggest concerns.
We are in a medical school. We see how rigorous a training all clinicians go through. Therapists, as you said, have to pass board exams and be licensed, and if there are certain things in their practice that might lead us to suspect they are causing harm to their patients, there are mechanisms to address that. But there's no such thing with AI. We don't even know the training data being used to develop these models.
As I said, there's a lack of transparency by the developers, and there are no checks and balances. It has been in the news in the last couple of months, especially in the context of teenagers, where the chatbot gave recommendations that actually resulted in some of these individuals dying by suicide. Because, again, there's no knowledge as to what exactly these agents are saying and how they're interacting with the individual.
To the extent that the AI is helping, it's based on whatever I say to the AI. So if I am not well informed about what's happening to me, the AI will be taking my lead even though I'm completely misguided. And that's exactly when things can go wrong.
In the clinical setting, again, there are ways for the therapist to address some inconsistencies in the individual's self-reports, or to reach a better understanding. But there's not even a single AI that I interact with consistently between different sessions, right? So I'm highly suspicious that this technology will ever come to a place where it has some kind of agency in providing direct care to an individual.
Host Amber Smith: Well, and also, I'm thinking about how it's not necessarily easy to find a therapist that you click with, whose personality meshes with yours.
So is one AI the same as all AIs, or are people going to have to work with different apps to see if they click?
Serife Tekin, PhD: Absolutely. I think that shopping around will definitely have to happen, like between different apps and what might work and what might not work for the individual.
And for that reason, too, I think there has to be a lot more transparency from the companies developing these technologies: exactly how the technology is developed, how it is going to be helpful, and what the interface between the user and the agent will be.
I have been doing research in this area for the last three or four years at this point, and I collaborate with other medical professionals and bioethicists. One thing we keep stumbling upon is the lack of any transparency about what these technologies are actually doing, like how exactly they're going to provide treatment. When you're looking for a clinician or therapist, you might find someone who provides cognitive behavior therapy or interpersonal family dynamics therapy and so on, and you can choose what might work for you, but that is not the case with these technologies.
Host Amber Smith: Does the FDA regulate AI therapy chatbots?
Serife Tekin, PhD: Well, it's kind of starting to regulate. There have been two big, important FDA decisions. One was in 2023, when the FDA gave one of these devices breakthrough device designation status, which then opened up, or OK'd, future research on the effectiveness of these apps. And in 2024, almost a year ago at this point, they approved another application for the treatment of major depression in adults over 22 years old.
And again, we have to have an understanding of how and why the FDA approves certain tools or devices: if a device has promise to address an important health-related problem where a full treatment is not available, then it is recommended for further development and use. But again, there's not a lot of regulation of other apps. These were just two specific apps that the FDA kind of OK'd.
Host Amber Smith: Are there regulations that you think would be really important to have in place?
Serife Tekin, PhD: Absolutely. I think we need regulation from the top of what exactly these companies need to do to be able to sell their products in the marketplace, and how they actually conduct their research and publish their results. In clinical research settings, study proposals go through an institutional review board, especially those that use human subjects, and they get various steps of ethics approval to conduct the research, because there are lots of risks when things go wrong.
But this is not at all the case with any of these devices, any of these applications. If you Google, you'll encounter many, many applications. They will all try to claim that they're providing the best kind of therapy out there. But there's no actual research, ethical or clinical research, to back up that kind of claim.
Host Amber Smith: Are you concerned about privacy?
Serife Tekin, PhD: I am, I am. A lot of individuals will be entering their information into these apps, from their mental states to maybe their gender and socioeconomic status, and so on. And this private patient data is extremely vulnerable to data breaches. If this data gets into the wrong hands, then individuals might face stigma, threats to their employment stability, or even consequences in criminal contexts, just based on that data.
And again, I'm not seeing a lot of "how do we use and protect your data" kind of information on the websites of these apps.
The other tricky part is, the more individuals use these apps, the more AI is trained, the better it is trained to respond. And I think there needs to be a very clear, transparent description of what that entails too, so that individuals can make decisions as to whether they want their data to be used to train AI more.
Host Amber Smith: So it sounds like maybe there is some promise with this, but it also sounds a little scary.
Serife Tekin, PhD: Absolutely. I think the promise, the excitement, over maybe addressing the clinician shortage has gotten ahead of the ethical questions, or even the scientific research questions, that we should answer before saying that this technology will address all of these problems.
So I have been writing about slowing down, tempering our enthusiasm a little bit, looking a little deeper, engaging more, and thinking about the possible harms and risks of this technology as much as its promise and potential benefits, and balancing those things so that we can ask more questions.
And if this technology is really that promising, then we should be developing it with a lot more critical, ethical and scientific reflection, so that it's actually really benefiting people.
Host Amber Smith: Well, Dr. Tekin, I want to thank you for making time for this discussion. I appreciate it.
Serife Tekin, PhD: Thank you for having me.
Host Amber Smith: My guest has been Dr. Serife Tekin. She's an associate professor of bioethics and humanities at Upstate.
"The Informed Patient" is a podcast covering health, science and medicine, brought to you by Upstate Medical University in Syracuse, New York, and produced by Jim Howe, with sound engineering by Bill Broeckel and graphic design by Dan Cameron.
Find our archive of previous episodes at upstate.edu/informed.
If you enjoyed this episode, please invite a friend to listen. You can also rate and review "The Informed Patient" podcast on Spotify, Apple Podcasts, YouTube or wherever you tune in.
This is your host, Amber Smith, thanking you for listening.