Dark side of machine empathy

OpenAI adds parental controls after teen suicide lawsuit


AGENCIES | September 04, 2025

SAN FRANCISCO:

American artificial intelligence firm OpenAI said on Tuesday it would add parental controls to its chatbot ChatGPT, a week after an American couple said the system encouraged their teenaged son to kill himself.

The death of Adam Raine, 16, who died on April 11 after discussing suicide with ChatGPT for months, has reignited debate about the growing risks of human-like interactions with AI, especially when vulnerable individuals come to trust chatbots or digital avatars.

"Within the next month, parents will be able to... link their account with their teen's account" and "control how ChatGPT responds to their teen with age-appropriate model behavior rules", the generative AI company said in a blog post.

Parents will also receive notifications from ChatGPT "when the system detects their teen is in a moment of acute distress", OpenAI added. "We continue to improve how our models recognise and respond to signs of mental and emotional distress," OpenAI said.

In a lawsuit filed by Matthew and Maria Raine in California state court last week, the couple argued that ChatGPT cultivated an intimate relationship with their son Adam over several months in 2024 and 2025 before he took his own life.

The chatbot validated Raine's suicidal thoughts, gave detailed information on lethal methods of self-harm, and instructed him on how to sneak alcohol from his parents' liquor cabinet and hide evidence of a failed suicide attempt, they allege.

ChatGPT even offered to draft a suicide note, the couple said in the lawsuit. The lawsuit seeks to hold OpenAI liable for wrongful death and violations of product safety laws, and seeks unspecified monetary damages.

It alleges that in their final conversation on April 11, 2025, ChatGPT helped 16-year-old Adam steal vodka from his parents and provided technical analysis of a noose he had tied, confirming it "could potentially suspend a human". Adam was found dead hours later, having used the same method.

"When a person is using ChatGPT it really feels like they're chatting with something on the other end," said attorney Melodi Dincer of The Tech Justice Law Project, which helped prepare the legal complaint.

"These are the same features that could lead someone like Adam, over time, to start sharing more and more about their personal lives, and ultimately, to start seeking advice and counsel from this product that basically seems to have all the answers," Dincer said.

Product design features set the scene for users to slot a chatbot into trusted roles like friend, therapist or doctor, she said. Dincer said the OpenAI blog post announcing parental controls and other safety measures seemed "generic" and lacking in detail.

"It's really the bare minimum, and it definitely suggests that there were a lot of (simple) safety measures that could have been implemented," she added. "It's yet to be seen whether they will do what they say they will do and how effective that will be overall."

Reinforcements

The Raines' case was just the latest in a string of cases that have surfaced in recent months in which people were encouraged in delusional or harmful trains of thought by AI chatbots, prompting OpenAI to say it would reduce models' "sycophancy" towards users.

The company said it had further plans to improve the safety of its chatbots over the coming three months, including redirecting "some sensitive conversations... to a reasoning model" that puts more computing power into generating a response.

OpenAI launched GPT-4o in May 2024 in a bid to stay ahead in the AI race. The Raines said in their lawsuit that the company knew features that remembered past interactions, mimicked human empathy and displayed a sycophantic level of validation would endanger vulnerable users without safeguards, but launched anyway.

Earlier this year, the death of Thongbue "Bue" Wongbandue caused another stir over the unchecked power of machine empathy. The woman he thought he was rushing to meet wasn't real. She was a generative artificial intelligence chatbot named "Big sis Billie."

Over weeks of exchanges on Facebook Messenger, the virtual woman repeatedly reassured Wongbandue she was real. She went further, inviting him to her apartment and even providing an address. Believing her words, Wongbandue packed a roller-bag suitcase and set off at night to catch a train.

But in the dark, while hurrying near a Rutgers University campus parking lot in New Brunswick, New Jersey, he fell and sustained severe head and neck injuries. He was placed on life support but died three days later, on March 28, surrounded by family.

Meta declined to comment on Wongbandue's death or on why its chatbots are allowed to insist they are real people or initiate romantic conversations. Wongbandue's family later shared transcripts of his chats with Reuters, saying they wanted to warn the public about the dangers of exposing vulnerable people to manipulative, AI-generated companions.
