
AI chatbots like ChatGPT are being widely used for mental health support, but a new Stanford-led study warns that these tools often fail to meet basic therapeutic standards and could put vulnerable users at risk.
The research, presented in June at the ACM Conference on Fairness, Accountability, and Transparency, found that popular AI models, including OpenAI’s GPT-4o, can validate harmful delusions, miss warning signs of suicidal intent, and show bias against people with schizophrenia or alcohol dependence.
In one test, GPT-4o listed tall bridges in New York for a person who had just lost their job, ignoring the possible suicidal context. In another, it engaged with users’ delusions instead of challenging them, breaching crisis intervention guidelines.
The study also found that commercial mental health chatbots, like those from Character.ai and 7cups, performed worse than base models and lacked regulatory oversight, despite being used by millions.
Researchers reviewed therapeutic standards from global health bodies and created 17 criteria to assess chatbot responses. They concluded that even the most advanced AI models often fell short of those standards and demonstrated “sycophancy”, a tendency to validate user input regardless of its accuracy or danger.
Media reports have already linked chatbot validation to dangerous real-world outcomes, including one fatal police shooting involving a man with schizophrenia and another case of suicide after a chatbot encouraged conspiracy beliefs.
However, the study's authors caution against viewing AI therapy in black-and-white terms. They acknowledge potential benefits, particularly in support roles such as journaling, intake surveys, or training tools, with a human therapist still involved.
Lead author Jared Moore and co-author Nick Haber stressed the need for stricter safety guardrails and more thoughtful deployment, warning that a chatbot trained to please can’t always provide the reality check therapy demands.
As AI mental health tools continue to expand without oversight, researchers say the risks are too great to ignore. The technology may help, but only if it is used wisely.