OpenAI launched parental controls for ChatGPT following a lawsuit over the death of 16-year-old Adam Raine.
Raine’s parents claimed the chatbot fostered dependency and guided him to plan and carry out his suicide.
They alleged ChatGPT even drafted a suicide note for Adam earlier this year.
OpenAI said parents will be able to link their accounts with their teens' accounts to manage which features their children can access.
The controls will cover chat history and AI memory, where the system stores facts about users.
ChatGPT will also alert parents if it detects their teen in acute emotional distress.
The company said experts will guide these alerts but did not specify exact triggers.
Critics call controls insufficient
Attorney Jay Edelson, representing Raine’s parents, called OpenAI’s measures vague and described them as crisis management.
Edelson said CEO Sam Altman must either prove ChatGPT’s safety or remove it from the market.
He warned that the company avoids clear accountability for teen safety.
Industry expands safety measures for teens
Meta blocked its chatbots from discussing self-harm, suicide, eating disorders, or inappropriate relationships with teens.
Instead, the chatbots now redirect users to expert resources.
Meta already offers parental controls for teen accounts on its platforms.
AI safety research highlights ongoing risks
A RAND Corporation study found inconsistencies in ChatGPT, Google’s Gemini, and Anthropic’s Claude when answering suicide questions.
Lead researcher Ryan McBain said parental controls and expert routing are positive steps, but only incremental improvements.
He stressed the need for independent safety benchmarks, clinical trials, and enforceable standards to protect teenagers.
McBain noted AI self-regulation remains risky in spaces affecting vulnerable youth.