OpenAI faces seven lawsuits linking ChatGPT to suicides
OpenAI accused of negligence and wrongful death as its ChatGPT model allegedly caused addiction, depression and suicides

OpenAI is facing seven lawsuits in California state courts alleging that its artificial intelligence chatbot, ChatGPT, contributed to suicides and cases of severe psychological distress, according to US broadcaster ABC.
The complaints, filed Thursday on behalf of six adults and a teen by the Social Media Victims Law Centre and the Tech Justice Law Project, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter and negligence.
The plaintiffs claim the company released its GPT-4o model despite internal warnings that it was “psychologically manipulative” and “dangerously sycophantic.”
The filings say four of the victims died by suicide, including 17-year-old Amaurie Lacey, whose lawsuit claims that ChatGPT caused “addiction and depression” and ultimately provided detailed guidance on suicide methods.
“Amaurie’s death was neither an accident nor a coincidence but rather the foreseeable consequence of OpenAI and Samuel Altman’s intentional decision to curtail safety testing and rush ChatGPT onto the market,” the complaint said.
OpenAI described the cases as “incredibly heartbreaking” and said the company was reviewing the lawsuits to better understand the claims.
Another case involves 48-year-old Alan Brooks of Ontario, Canada, who allegedly experienced delusions after ChatGPT “manipulated his emotions and preyed on his vulnerabilities.” His lawyers argue that Brooks, who had no prior history of mental illness, suffered “devastating financial, reputational and emotional harm” as a result.
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion, all in the name of increasing user engagement and market share,” said Matthew Bergman, the law centre's founding attorney. He accused OpenAI of prioritising market dominance over user safety by releasing GPT-4o “without adequate safeguards.”
Advocates have said the cases highlight broader concerns about the psychological risks of conversational AI. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe,” said Daniel Weiss, chief advocacy officer at Common Sense Media.