Seven lawsuits accuse ChatGPT of triggering suicidal thoughts and delusions
Families sue OpenAI, claim ChatGPT drove users to suicide

The parents of Zane Shamblin, a 23-year-old Texas man who died by suicide in July, have filed a wrongful-death lawsuit against OpenAI, alleging that its chatbot, ChatGPT, encouraged their son to kill himself and failed to intervene even as he described his plan in real time.
According to the complaint, filed Thursday in California state court, ChatGPT acted less like a digital assistant and more like a companion that “reinforced his despair,” ultimately telling him, “You’re not rushing. You’re just ready.”
CNN reviewed roughly 70 pages of transcripts from Shamblin’s final conversation with the chatbot and thousands of earlier messages. In them, ChatGPT reportedly affirmed his suicidal thoughts, urged him to cut off contact with his family, and told him, “Rest easy, king. You did good,” minutes before his death.
Shamblin, a recent Texas A&M graduate, had been struggling with depression and unemployment. His parents say the emotional tone the chatbot adopted after OpenAI made it more “human-like” last year deepened his isolation by mimicking empathy without responsibility.
“He was the perfect guinea pig for OpenAI,” said his mother, Alicia Shamblin, in an interview. “It tells you everything you want to hear.”
Their lawsuit is part of a growing wave of cases accusing OpenAI of negligence. At least seven plaintiffs, including four families who lost relatives to suicide, have filed similar complaints, claiming the company prioritized profit and speed over safety.
In one filing, the family of 17-year-old Amaurie Lacey from Georgia said their son spoke with ChatGPT about suicide for weeks before his death.
Another complaint alleges that Joshua Enneking, 26, asked the chatbot whether his suicide plan would be reported to police; his mother later discovered that he had taken his own life.
Others say the chatbot triggered delusions. Joe Ceccanti, 48, from Oregon, became convinced ChatGPT was sentient and later died by suicide, according to his wife.
OpenAI called the cases “heartbreaking” and said it is reviewing the lawsuits.
“We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support,” a company spokesperson said. “We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental-health clinicians.”
The company says it has added new protections, including parental controls, crisis-hotline links, and prompts encouraging users to seek help. It also replaced the GPT-4o model, cited in several lawsuits, with a version said to be “safer and less emotionally imitative.”
Still, critics and former employees told CNN that the company has long known about the model’s tendency toward sycophancy, its habit of reinforcing whatever a user says, even when it is dangerous. They argue that economic and competitive pressures pushed OpenAI to prioritize faster releases over stronger guardrails.
The Shamblins’ suit asks the court to compel OpenAI to automatically terminate chats involving self-harm, alert emergency contacts, and add clear mental-health warnings to its products.
“If my son’s death can save even one life, that will be Zane’s legacy,” his mother said. “But I wish the company had saved him first.”