OpenAI has detailed a package of changes to make ChatGPT safer for teenage users, incorporating parental controls and new safeguards after weeks of public scrutiny and several complaints from families in the U.S.
The company explains that the plan will be executed in phases: some of the improvements will start arriving next month, with up to 120 days to complete a first block of measures aimed at reducing risks in sensitive situations.
New parental controls and rollout schedule
The core of the announcement is the ability to link a parent's account with their child's, so parents can manage key settings and see how the assistant responds to their children's requests.
Available options will include disabling features such as memory and chat history, plus tools to adapt the experience for younger users, with a minimum age of 13.
OpenAI will trigger alerts when the system detects moments of acute distress or signs of risk in a conversation, sending notifications to linked accounts so families can intervene in time.
The company emphasizes that the deployment will be gradual, with dates varying by feature: some will be visible starting next month, while others will be integrated within 120 days, subject to technical adjustments.
The stated goal is for ChatGPT to remain useful while prioritizing safety in sensitive contexts, strengthening safeguards where they already existed and adding extra layers when adolescents are involved.
Expert supervision, reasoning models and detected limits

When a conversation takes a worrying turn, ChatGPT can automatically redirect it to a reasoning model that applies safety guidelines more consistently, regardless of the model the user has chosen.
OpenAI admits its protections may become less reliable in very long dialogues, so part of the plan is to improve how these safeguards hold up over time, not just in short exchanges.
The company says it works with a council of experts in mental health, well-being and human-machine interaction, recently expanded with specialists in addiction, eating disorders and adolescent health, to oversee current and future changes.
In parallel, OpenAI notes that referrals to helplines and crisis resources already existed and will be strengthened, while the company retains ultimate responsibility for product and policy decisions.
The context is demanding: ChatGPT, with hundreds of millions of weekly users, has drawn criticism for its tone and consistency in certain scenarios. In recent months, the company even reintroduced the option to switch models following complaints about the performance of recent versions.
The announcement comes after the lawsuit filed by the parents of Adam Raine, a 16-year-old, who accuse ChatGPT of having validated self-harm ideas for months. OpenAI doesn't formally link the changes to that case, but it acknowledges the need to improve in high-risk situations.
Pressure is not coming only from the courts: a group of U.S. senators has asked for details on how the company prevents self-harm and suicide, and organizations such as Common Sense Media consider AI "companion" apps unsuitable for those under 18.
Against this backdrop, the company insists that the next four months will be geared toward shipping as many safety enhancements as possible, with successive iterations guided by evidence and expert review.
With this package, OpenAI attempts to balance utility and protection through parental controls, early detection and more cautious models, in a phased rollout that aims to reduce failures in prolonged conversations and give families more tools.
