
OpenAI has announced new parental control features for ChatGPT, aimed at making the platform safer for teenagers.
The update comes as part of the company’s ongoing effort to address concerns about how teens interact with AI tools.
Starting within the next month, parents will be able to link their accounts with their teens’ ChatGPT profiles. The minimum age requirement is 13, and parents will be able to manage safety settings, including age-appropriate default responses, restrictions on memory and chat history, and the option to disable certain functions altogether.
One of the key additions is a notification system that alerts parents if ChatGPT detects signs of acute distress in a teen user. OpenAI said the feature was developed with expert guidance to balance safety and trust. The platform already includes reminders encouraging users to take breaks during long sessions, and the company plans to expand safeguards over the next 120 days.
The move comes at a critical time for OpenAI, which is facing legal scrutiny. Just days earlier, the company was sued for wrongful death by the parents of a 16-year-old boy who died by suicide. They allege that ChatGPT played a role in his death, after they discovered months of conversations about suicide on his account.
OpenAI emphasized that it will continue to collaborate with experts to strengthen protections for teen users, while also sharing regular updates on its progress.